Test Report: KVM_Linux_crio 17967

                    
10ecd0aeb1ec35670d13066c60edb6e287060cba:2024-01-16:32725

Test fail (22/309)

TestAddons/parallel/Ingress (152.86s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-321835 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-321835 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-321835 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d2c94fc6-73dd-4d9a-97ca-3e782e24db68] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d2c94fc6-73dd-4d9a-97ca-3e782e24db68] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.005624481s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-321835 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-321835 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.538730517s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-321835 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-321835 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.11
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-321835 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-321835 addons disable ingress-dns --alsologtostderr -v=1: (1.628317545s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-321835 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-321835 addons disable ingress --alsologtostderr -v=1: (7.922135032s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-321835 -n addons-321835
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-321835 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-321835 logs -n 25: (1.512502738s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-423577                                                                     | download-only-423577 | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC | 16 Jan 24 02:00 UTC |
	| delete  | -p download-only-281930                                                                     | download-only-281930 | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC | 16 Jan 24 02:00 UTC |
	| delete  | -p download-only-248523                                                                     | download-only-248523 | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC | 16 Jan 24 02:00 UTC |
	| delete  | -p download-only-423577                                                                     | download-only-423577 | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC | 16 Jan 24 02:00 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-558610 | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC |                     |
	|         | binary-mirror-558610                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:36451                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-558610                                                                     | binary-mirror-558610 | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC | 16 Jan 24 02:00 UTC |
	| addons  | enable dashboard -p                                                                         | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC |                     |
	|         | addons-321835                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC |                     |
	|         | addons-321835                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-321835 --wait=true                                                                | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC | 16 Jan 24 02:03 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-321835 addons                                                                        | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:03 UTC | 16 Jan 24 02:03 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-321835 ssh cat                                                                       | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:03 UTC | 16 Jan 24 02:03 UTC |
	|         | /opt/local-path-provisioner/pvc-4b748b59-8a26-4a5c-b1da-42b4fce585de_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-321835 addons disable                                                                | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:03 UTC | 16 Jan 24 02:04 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:03 UTC | 16 Jan 24 02:03 UTC |
	|         | addons-321835                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:03 UTC | 16 Jan 24 02:03 UTC |
	|         | -p addons-321835                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-321835 ip                                                                            | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:03 UTC | 16 Jan 24 02:03 UTC |
	| addons  | addons-321835 addons disable                                                                | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:03 UTC | 16 Jan 24 02:03 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:03 UTC | 16 Jan 24 02:03 UTC |
	|         | addons-321835                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-321835 ssh curl -s                                                                   | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:03 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-321835 addons disable                                                                | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:03 UTC | 16 Jan 24 02:03 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:04 UTC | 16 Jan 24 02:04 UTC |
	|         | -p addons-321835                                                                            |                      |         |         |                     |                     |
	| addons  | addons-321835 addons                                                                        | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:04 UTC | 16 Jan 24 02:04 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-321835 addons                                                                        | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:04 UTC | 16 Jan 24 02:04 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-321835 ip                                                                            | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:06 UTC | 16 Jan 24 02:06 UTC |
	| addons  | addons-321835 addons disable                                                                | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:06 UTC | 16 Jan 24 02:06 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-321835 addons disable                                                                | addons-321835        | jenkins | v1.32.0 | 16 Jan 24 02:06 UTC | 16 Jan 24 02:06 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:00:40
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:00:40.251118  979198 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:00:40.251289  979198 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:00:40.251299  979198 out.go:309] Setting ErrFile to fd 2...
	I0116 02:00:40.251304  979198 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:00:40.251482  979198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 02:00:40.252170  979198 out.go:303] Setting JSON to false
	I0116 02:00:40.253302  979198 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":9790,"bootTime":1705360651,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:00:40.253398  979198 start.go:138] virtualization: kvm guest
	I0116 02:00:40.255834  979198 out.go:177] * [addons-321835] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:00:40.258059  979198 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 02:00:40.258111  979198 notify.go:220] Checking for updates...
	I0116 02:00:40.259743  979198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:00:40.261388  979198 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:00:40.262987  979198 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:00:40.264367  979198 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:00:40.265931  979198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:00:40.267475  979198 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:00:40.301494  979198 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 02:00:40.302989  979198 start.go:298] selected driver: kvm2
	I0116 02:00:40.303008  979198 start.go:902] validating driver "kvm2" against <nil>
	I0116 02:00:40.303026  979198 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:00:40.303811  979198 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:00:40.303924  979198 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17967-971255/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 02:00:40.320678  979198 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 02:00:40.320800  979198 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:00:40.321068  979198 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 02:00:40.321141  979198 cni.go:84] Creating CNI manager for ""
	I0116 02:00:40.321157  979198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 02:00:40.321178  979198 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 02:00:40.321191  979198 start_flags.go:321] config:
	{Name:addons-321835 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-321835 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:00:40.321369  979198 iso.go:125] acquiring lock: {Name:mkc0ce0b4b435c5fb7570521b794e5982bb7bf6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:00:40.323519  979198 out.go:177] * Starting control plane node addons-321835 in cluster addons-321835
	I0116 02:00:40.325243  979198 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:00:40.325302  979198 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 02:00:40.325323  979198 cache.go:56] Caching tarball of preloaded images
	I0116 02:00:40.325443  979198 preload.go:174] Found /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 02:00:40.325457  979198 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 02:00:40.325883  979198 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/config.json ...
	I0116 02:00:40.325920  979198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/config.json: {Name:mk15e99d405b6ab549b899d4d6dac8e4f75e2615 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:00:40.326088  979198 start.go:365] acquiring machines lock for addons-321835: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 02:00:40.326150  979198 start.go:369] acquired machines lock for "addons-321835" in 45.813µs
	I0116 02:00:40.326176  979198 start.go:93] Provisioning new machine with config: &{Name:addons-321835 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-321835 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:00:40.326268  979198 start.go:125] createHost starting for "" (driver="kvm2")
	I0116 02:00:40.328197  979198 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0116 02:00:40.328423  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:00:40.328483  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:00:40.343981  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I0116 02:00:40.344493  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:00:40.345134  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:00:40.345161  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:00:40.345549  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:00:40.345744  979198 main.go:141] libmachine: (addons-321835) Calling .GetMachineName
	I0116 02:00:40.345957  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:00:40.346128  979198 start.go:159] libmachine.API.Create for "addons-321835" (driver="kvm2")
	I0116 02:00:40.346167  979198 client.go:168] LocalClient.Create starting
	I0116 02:00:40.346219  979198 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem
	I0116 02:00:40.483759  979198 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem
	I0116 02:00:40.574247  979198 main.go:141] libmachine: Running pre-create checks...
	I0116 02:00:40.574281  979198 main.go:141] libmachine: (addons-321835) Calling .PreCreateCheck
	I0116 02:00:40.574861  979198 main.go:141] libmachine: (addons-321835) Calling .GetConfigRaw
	I0116 02:00:40.575418  979198 main.go:141] libmachine: Creating machine...
	I0116 02:00:40.575436  979198 main.go:141] libmachine: (addons-321835) Calling .Create
	I0116 02:00:40.575625  979198 main.go:141] libmachine: (addons-321835) Creating KVM machine...
	I0116 02:00:40.576909  979198 main.go:141] libmachine: (addons-321835) DBG | found existing default KVM network
	I0116 02:00:40.577734  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:40.577568  979220 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I0116 02:00:40.583937  979198 main.go:141] libmachine: (addons-321835) DBG | trying to create private KVM network mk-addons-321835 192.168.39.0/24...
	I0116 02:00:40.661196  979198 main.go:141] libmachine: (addons-321835) DBG | private KVM network mk-addons-321835 192.168.39.0/24 created
	I0116 02:00:40.661235  979198 main.go:141] libmachine: (addons-321835) Setting up store path in /home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835 ...
	I0116 02:00:40.661251  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:40.661155  979220 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:00:40.661284  979198 main.go:141] libmachine: (addons-321835) Building disk image from file:///home/jenkins/minikube-integration/17967-971255/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 02:00:40.661370  979198 main.go:141] libmachine: (addons-321835) Downloading /home/jenkins/minikube-integration/17967-971255/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17967-971255/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 02:00:40.899631  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:40.899449  979220 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa...
	I0116 02:00:41.045884  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:41.045650  979220 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/addons-321835.rawdisk...
	I0116 02:00:41.045932  979198 main.go:141] libmachine: (addons-321835) DBG | Writing magic tar header
	I0116 02:00:41.045949  979198 main.go:141] libmachine: (addons-321835) DBG | Writing SSH key tar header
	I0116 02:00:41.045964  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:41.045858  979220 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835 ...
	I0116 02:00:41.045987  979198 main.go:141] libmachine: (addons-321835) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835
	I0116 02:00:41.046063  979198 main.go:141] libmachine: (addons-321835) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835 (perms=drwx------)
	I0116 02:00:41.046102  979198 main.go:141] libmachine: (addons-321835) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube/machines (perms=drwxr-xr-x)
	I0116 02:00:41.046116  979198 main.go:141] libmachine: (addons-321835) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube/machines
	I0116 02:00:41.046131  979198 main.go:141] libmachine: (addons-321835) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:00:41.046143  979198 main.go:141] libmachine: (addons-321835) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube (perms=drwxr-xr-x)
	I0116 02:00:41.046153  979198 main.go:141] libmachine: (addons-321835) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255
	I0116 02:00:41.046169  979198 main.go:141] libmachine: (addons-321835) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 02:00:41.046189  979198 main.go:141] libmachine: (addons-321835) DBG | Checking permissions on dir: /home/jenkins
	I0116 02:00:41.046205  979198 main.go:141] libmachine: (addons-321835) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255 (perms=drwxrwxr-x)
	I0116 02:00:41.046214  979198 main.go:141] libmachine: (addons-321835) DBG | Checking permissions on dir: /home
	I0116 02:00:41.046224  979198 main.go:141] libmachine: (addons-321835) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 02:00:41.046236  979198 main.go:141] libmachine: (addons-321835) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 02:00:41.046244  979198 main.go:141] libmachine: (addons-321835) Creating domain...
	I0116 02:00:41.046258  979198 main.go:141] libmachine: (addons-321835) DBG | Skipping /home - not owner
	I0116 02:00:41.047475  979198 main.go:141] libmachine: (addons-321835) define libvirt domain using xml: 
	I0116 02:00:41.047514  979198 main.go:141] libmachine: (addons-321835) <domain type='kvm'>
	I0116 02:00:41.047527  979198 main.go:141] libmachine: (addons-321835)   <name>addons-321835</name>
	I0116 02:00:41.047538  979198 main.go:141] libmachine: (addons-321835)   <memory unit='MiB'>4000</memory>
	I0116 02:00:41.047554  979198 main.go:141] libmachine: (addons-321835)   <vcpu>2</vcpu>
	I0116 02:00:41.047568  979198 main.go:141] libmachine: (addons-321835)   <features>
	I0116 02:00:41.047582  979198 main.go:141] libmachine: (addons-321835)     <acpi/>
	I0116 02:00:41.047594  979198 main.go:141] libmachine: (addons-321835)     <apic/>
	I0116 02:00:41.047623  979198 main.go:141] libmachine: (addons-321835)     <pae/>
	I0116 02:00:41.047643  979198 main.go:141] libmachine: (addons-321835)     
	I0116 02:00:41.047655  979198 main.go:141] libmachine: (addons-321835)   </features>
	I0116 02:00:41.047674  979198 main.go:141] libmachine: (addons-321835)   <cpu mode='host-passthrough'>
	I0116 02:00:41.047687  979198 main.go:141] libmachine: (addons-321835)   
	I0116 02:00:41.047700  979198 main.go:141] libmachine: (addons-321835)   </cpu>
	I0116 02:00:41.047725  979198 main.go:141] libmachine: (addons-321835)   <os>
	I0116 02:00:41.047745  979198 main.go:141] libmachine: (addons-321835)     <type>hvm</type>
	I0116 02:00:41.047761  979198 main.go:141] libmachine: (addons-321835)     <boot dev='cdrom'/>
	I0116 02:00:41.047773  979198 main.go:141] libmachine: (addons-321835)     <boot dev='hd'/>
	I0116 02:00:41.047819  979198 main.go:141] libmachine: (addons-321835)     <bootmenu enable='no'/>
	I0116 02:00:41.047846  979198 main.go:141] libmachine: (addons-321835)   </os>
	I0116 02:00:41.047856  979198 main.go:141] libmachine: (addons-321835)   <devices>
	I0116 02:00:41.047865  979198 main.go:141] libmachine: (addons-321835)     <disk type='file' device='cdrom'>
	I0116 02:00:41.047882  979198 main.go:141] libmachine: (addons-321835)       <source file='/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/boot2docker.iso'/>
	I0116 02:00:41.047891  979198 main.go:141] libmachine: (addons-321835)       <target dev='hdc' bus='scsi'/>
	I0116 02:00:41.047898  979198 main.go:141] libmachine: (addons-321835)       <readonly/>
	I0116 02:00:41.047903  979198 main.go:141] libmachine: (addons-321835)     </disk>
	I0116 02:00:41.047910  979198 main.go:141] libmachine: (addons-321835)     <disk type='file' device='disk'>
	I0116 02:00:41.047920  979198 main.go:141] libmachine: (addons-321835)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 02:00:41.047940  979198 main.go:141] libmachine: (addons-321835)       <source file='/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/addons-321835.rawdisk'/>
	I0116 02:00:41.047948  979198 main.go:141] libmachine: (addons-321835)       <target dev='hda' bus='virtio'/>
	I0116 02:00:41.047973  979198 main.go:141] libmachine: (addons-321835)     </disk>
	I0116 02:00:41.047993  979198 main.go:141] libmachine: (addons-321835)     <interface type='network'>
	I0116 02:00:41.048001  979198 main.go:141] libmachine: (addons-321835)       <source network='mk-addons-321835'/>
	I0116 02:00:41.048023  979198 main.go:141] libmachine: (addons-321835)       <model type='virtio'/>
	I0116 02:00:41.048032  979198 main.go:141] libmachine: (addons-321835)     </interface>
	I0116 02:00:41.048037  979198 main.go:141] libmachine: (addons-321835)     <interface type='network'>
	I0116 02:00:41.048044  979198 main.go:141] libmachine: (addons-321835)       <source network='default'/>
	I0116 02:00:41.048052  979198 main.go:141] libmachine: (addons-321835)       <model type='virtio'/>
	I0116 02:00:41.048059  979198 main.go:141] libmachine: (addons-321835)     </interface>
	I0116 02:00:41.048066  979198 main.go:141] libmachine: (addons-321835)     <serial type='pty'>
	I0116 02:00:41.048076  979198 main.go:141] libmachine: (addons-321835)       <target port='0'/>
	I0116 02:00:41.048081  979198 main.go:141] libmachine: (addons-321835)     </serial>
	I0116 02:00:41.048090  979198 main.go:141] libmachine: (addons-321835)     <console type='pty'>
	I0116 02:00:41.048099  979198 main.go:141] libmachine: (addons-321835)       <target type='serial' port='0'/>
	I0116 02:00:41.048105  979198 main.go:141] libmachine: (addons-321835)     </console>
	I0116 02:00:41.048114  979198 main.go:141] libmachine: (addons-321835)     <rng model='virtio'>
	I0116 02:00:41.048151  979198 main.go:141] libmachine: (addons-321835)       <backend model='random'>/dev/random</backend>
	I0116 02:00:41.048177  979198 main.go:141] libmachine: (addons-321835)     </rng>
	I0116 02:00:41.048192  979198 main.go:141] libmachine: (addons-321835)     
	I0116 02:00:41.048203  979198 main.go:141] libmachine: (addons-321835)     
	I0116 02:00:41.048214  979198 main.go:141] libmachine: (addons-321835)   </devices>
	I0116 02:00:41.048220  979198 main.go:141] libmachine: (addons-321835) </domain>
	I0116 02:00:41.048237  979198 main.go:141] libmachine: (addons-321835) 
	I0116 02:00:41.054353  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:67:30:48 in network default
	I0116 02:00:41.054892  979198 main.go:141] libmachine: (addons-321835) Ensuring networks are active...
	I0116 02:00:41.054917  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:00:41.055569  979198 main.go:141] libmachine: (addons-321835) Ensuring network default is active
	I0116 02:00:41.055869  979198 main.go:141] libmachine: (addons-321835) Ensuring network mk-addons-321835 is active
	I0116 02:00:41.056444  979198 main.go:141] libmachine: (addons-321835) Getting domain xml...
	I0116 02:00:41.057185  979198 main.go:141] libmachine: (addons-321835) Creating domain...
	I0116 02:00:42.451847  979198 main.go:141] libmachine: (addons-321835) Waiting to get IP...
	I0116 02:00:42.452689  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:00:42.453102  979198 main.go:141] libmachine: (addons-321835) DBG | unable to find current IP address of domain addons-321835 in network mk-addons-321835
	I0116 02:00:42.453137  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:42.453088  979220 retry.go:31] will retry after 199.013974ms: waiting for machine to come up
	I0116 02:00:42.653667  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:00:42.654237  979198 main.go:141] libmachine: (addons-321835) DBG | unable to find current IP address of domain addons-321835 in network mk-addons-321835
	I0116 02:00:42.654262  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:42.654210  979220 retry.go:31] will retry after 338.651489ms: waiting for machine to come up
	I0116 02:00:42.996342  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:00:42.996872  979198 main.go:141] libmachine: (addons-321835) DBG | unable to find current IP address of domain addons-321835 in network mk-addons-321835
	I0116 02:00:42.996897  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:42.996805  979220 retry.go:31] will retry after 391.709955ms: waiting for machine to come up
	I0116 02:00:43.390497  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:00:43.390894  979198 main.go:141] libmachine: (addons-321835) DBG | unable to find current IP address of domain addons-321835 in network mk-addons-321835
	I0116 02:00:43.390968  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:43.390872  979220 retry.go:31] will retry after 537.51527ms: waiting for machine to come up
	I0116 02:00:43.929704  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:00:43.930198  979198 main.go:141] libmachine: (addons-321835) DBG | unable to find current IP address of domain addons-321835 in network mk-addons-321835
	I0116 02:00:43.930233  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:43.930116  979220 retry.go:31] will retry after 762.456192ms: waiting for machine to come up
	I0116 02:00:44.694159  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:00:44.694593  979198 main.go:141] libmachine: (addons-321835) DBG | unable to find current IP address of domain addons-321835 in network mk-addons-321835
	I0116 02:00:44.694627  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:44.694556  979220 retry.go:31] will retry after 768.21417ms: waiting for machine to come up
	I0116 02:00:45.464545  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:00:45.465133  979198 main.go:141] libmachine: (addons-321835) DBG | unable to find current IP address of domain addons-321835 in network mk-addons-321835
	I0116 02:00:45.465180  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:45.465076  979220 retry.go:31] will retry after 763.698229ms: waiting for machine to come up
	I0116 02:00:46.230422  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:00:46.230950  979198 main.go:141] libmachine: (addons-321835) DBG | unable to find current IP address of domain addons-321835 in network mk-addons-321835
	I0116 02:00:46.230983  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:46.230880  979220 retry.go:31] will retry after 1.011258145s: waiting for machine to come up
	I0116 02:00:47.244084  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:00:47.244693  979198 main.go:141] libmachine: (addons-321835) DBG | unable to find current IP address of domain addons-321835 in network mk-addons-321835
	I0116 02:00:47.244729  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:47.244621  979220 retry.go:31] will retry after 1.831685017s: waiting for machine to come up
	I0116 02:00:49.078944  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:00:49.079272  979198 main.go:141] libmachine: (addons-321835) DBG | unable to find current IP address of domain addons-321835 in network mk-addons-321835
	I0116 02:00:49.079325  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:49.079218  979220 retry.go:31] will retry after 2.15010501s: waiting for machine to come up
	I0116 02:00:51.230504  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:00:51.231087  979198 main.go:141] libmachine: (addons-321835) DBG | unable to find current IP address of domain addons-321835 in network mk-addons-321835
	I0116 02:00:51.231121  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:51.231032  979220 retry.go:31] will retry after 1.870639599s: waiting for machine to come up
	I0116 02:00:53.104433  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:00:53.104941  979198 main.go:141] libmachine: (addons-321835) DBG | unable to find current IP address of domain addons-321835 in network mk-addons-321835
	I0116 02:00:53.104972  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:53.104894  979220 retry.go:31] will retry after 3.605387542s: waiting for machine to come up
	I0116 02:00:56.712238  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:00:56.712580  979198 main.go:141] libmachine: (addons-321835) DBG | unable to find current IP address of domain addons-321835 in network mk-addons-321835
	I0116 02:00:56.712611  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:00:56.712525  979220 retry.go:31] will retry after 3.694791947s: waiting for machine to come up
	I0116 02:01:00.408562  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:00.408994  979198 main.go:141] libmachine: (addons-321835) DBG | unable to find current IP address of domain addons-321835 in network mk-addons-321835
	I0116 02:01:00.409020  979198 main.go:141] libmachine: (addons-321835) DBG | I0116 02:01:00.408949  979220 retry.go:31] will retry after 4.015719534s: waiting for machine to come up
	I0116 02:01:04.429260  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:04.429715  979198 main.go:141] libmachine: (addons-321835) Found IP for machine: 192.168.39.11
	I0116 02:01:04.429747  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has current primary IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:04.429757  979198 main.go:141] libmachine: (addons-321835) Reserving static IP address...
	I0116 02:01:04.430209  979198 main.go:141] libmachine: (addons-321835) DBG | unable to find host DHCP lease matching {name: "addons-321835", mac: "52:54:00:8e:69:ea", ip: "192.168.39.11"} in network mk-addons-321835
	I0116 02:01:04.550343  979198 main.go:141] libmachine: (addons-321835) DBG | Getting to WaitForSSH function...
	I0116 02:01:04.550379  979198 main.go:141] libmachine: (addons-321835) Reserved static IP address: 192.168.39.11
	I0116 02:01:04.550441  979198 main.go:141] libmachine: (addons-321835) Waiting for SSH to be available...
	I0116 02:01:04.552970  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:04.553393  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:04.553436  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:04.553586  979198 main.go:141] libmachine: (addons-321835) DBG | Using SSH client type: external
	I0116 02:01:04.553643  979198 main.go:141] libmachine: (addons-321835) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa (-rw-------)
	I0116 02:01:04.553687  979198 main.go:141] libmachine: (addons-321835) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 02:01:04.553708  979198 main.go:141] libmachine: (addons-321835) DBG | About to run SSH command:
	I0116 02:01:04.553741  979198 main.go:141] libmachine: (addons-321835) DBG | exit 0
	I0116 02:01:04.658120  979198 main.go:141] libmachine: (addons-321835) DBG | SSH cmd err, output: <nil>: 
	I0116 02:01:04.658369  979198 main.go:141] libmachine: (addons-321835) KVM machine creation complete!
	I0116 02:01:04.658696  979198 main.go:141] libmachine: (addons-321835) Calling .GetConfigRaw
	I0116 02:01:04.674233  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:04.674624  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:04.674827  979198 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0116 02:01:04.674849  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:04.676420  979198 main.go:141] libmachine: Detecting operating system of created instance...
	I0116 02:01:04.676440  979198 main.go:141] libmachine: Waiting for SSH to be available...
	I0116 02:01:04.676449  979198 main.go:141] libmachine: Getting to WaitForSSH function...
	I0116 02:01:04.676460  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:04.679057  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:04.705379  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:04.705435  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:04.705756  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:04.706081  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:04.706329  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:04.706492  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:04.706693  979198 main.go:141] libmachine: Using SSH client type: native
	I0116 02:01:04.707172  979198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0116 02:01:04.707190  979198 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0116 02:01:04.841166  979198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:01:04.841199  979198 main.go:141] libmachine: Detecting the provisioner...
	I0116 02:01:04.841213  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:04.844047  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:04.844451  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:04.844489  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:04.844667  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:04.844906  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:04.845106  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:04.845336  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:04.845554  979198 main.go:141] libmachine: Using SSH client type: native
	I0116 02:01:04.846072  979198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0116 02:01:04.846093  979198 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0116 02:01:04.978820  979198 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0116 02:01:04.978980  979198 main.go:141] libmachine: found compatible host: buildroot
	I0116 02:01:04.978995  979198 main.go:141] libmachine: Provisioning with buildroot...
	I0116 02:01:04.979008  979198 main.go:141] libmachine: (addons-321835) Calling .GetMachineName
	I0116 02:01:04.979307  979198 buildroot.go:166] provisioning hostname "addons-321835"
	I0116 02:01:04.979342  979198 main.go:141] libmachine: (addons-321835) Calling .GetMachineName
	I0116 02:01:04.979545  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:04.982134  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:04.982504  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:04.982536  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:04.982729  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:04.982954  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:04.983111  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:04.983244  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:04.983412  979198 main.go:141] libmachine: Using SSH client type: native
	I0116 02:01:04.983883  979198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0116 02:01:04.983903  979198 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-321835 && echo "addons-321835" | sudo tee /etc/hostname
	I0116 02:01:05.131994  979198 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-321835
	
	I0116 02:01:05.132045  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:05.134975  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:05.135354  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:05.135380  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:05.135537  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:05.135805  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:05.136020  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:05.136229  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:05.136436  979198 main.go:141] libmachine: Using SSH client type: native
	I0116 02:01:05.136762  979198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0116 02:01:05.136780  979198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-321835' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-321835/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-321835' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:01:05.278252  979198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
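[editor's note] The shell block above rewrites the 127.0.1.1 entry in /etc/hosts only when the new hostname is not already present. A minimal sketch for checking the result, assuming the profile name used in this run:

    out/minikube-linux-amd64 -p addons-321835 ssh "hostname && grep addons-321835 /etc/hosts"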
	I0116 02:01:05.294001  979198 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 02:01:05.294058  979198 buildroot.go:174] setting up certificates
	I0116 02:01:05.294070  979198 provision.go:83] configureAuth start
	I0116 02:01:05.294093  979198 main.go:141] libmachine: (addons-321835) Calling .GetMachineName
	I0116 02:01:05.294487  979198 main.go:141] libmachine: (addons-321835) Calling .GetIP
	I0116 02:01:05.297484  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:05.297833  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:05.297866  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:05.298134  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:05.300075  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:05.300447  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:05.300471  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:05.300623  979198 provision.go:138] copyHostCerts
	I0116 02:01:05.300726  979198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 02:01:05.300929  979198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 02:01:05.301056  979198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 02:01:05.301145  979198 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.addons-321835 san=[192.168.39.11 192.168.39.11 localhost 127.0.0.1 minikube addons-321835]
	I0116 02:01:05.478043  979198 provision.go:172] copyRemoteCerts
	I0116 02:01:05.478113  979198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:01:05.478140  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:05.481636  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:05.482052  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:05.482087  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:05.482319  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:05.482566  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:05.482737  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:05.482870  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:05.579928  979198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 02:01:05.603710  979198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0116 02:01:05.626392  979198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 02:01:05.650405  979198 provision.go:86] duration metric: configureAuth took 356.293527ms
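[editor's note] configureAuth generated a server certificate with the SANs listed above (192.168.39.11, localhost, 127.0.0.1, minikube, addons-321835) and copied it to /etc/docker/server.pem on the guest. A hedged sketch for inspecting those SANs in place:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'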
	I0116 02:01:05.650441  979198 buildroot.go:189] setting minikube options for container-runtime
	I0116 02:01:05.650686  979198 config.go:182] Loaded profile config "addons-321835": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:01:05.650783  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:05.653882  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:05.654251  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:05.654281  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:05.654487  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:05.654730  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:05.654901  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:05.655023  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:05.655173  979198 main.go:141] libmachine: Using SSH client type: native
	I0116 02:01:05.655552  979198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0116 02:01:05.655578  979198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 02:01:06.228221  979198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
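[editor's note] The command above writes CRIO_MINIKUBE_OPTIONS (an --insecure-registry flag for the 10.96.0.0/12 service CIDR) to /etc/sysconfig/crio.minikube and restarts CRI-O. A quick sketch to confirm the option took effect on the guest:

    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio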
	I0116 02:01:06.228273  979198 main.go:141] libmachine: Checking connection to Docker...
	I0116 02:01:06.228302  979198 main.go:141] libmachine: (addons-321835) Calling .GetURL
	I0116 02:01:06.229713  979198 main.go:141] libmachine: (addons-321835) DBG | Using libvirt version 6000000
	I0116 02:01:06.231861  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:06.232372  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:06.232407  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:06.232705  979198 main.go:141] libmachine: Docker is up and running!
	I0116 02:01:06.232721  979198 main.go:141] libmachine: Reticulating splines...
	I0116 02:01:06.232729  979198 client.go:171] LocalClient.Create took 25.886549989s
	I0116 02:01:06.232754  979198 start.go:167] duration metric: libmachine.API.Create for "addons-321835" took 25.886627138s
	I0116 02:01:06.232769  979198 start.go:300] post-start starting for "addons-321835" (driver="kvm2")
	I0116 02:01:06.232781  979198 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:01:06.232799  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:06.233091  979198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:01:06.233123  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:06.235785  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:06.236128  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:06.236175  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:06.236316  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:06.236614  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:06.236785  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:06.236983  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:06.331314  979198 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:01:06.335612  979198 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 02:01:06.335644  979198 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 02:01:06.335718  979198 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 02:01:06.335746  979198 start.go:303] post-start completed in 102.97085ms
	I0116 02:01:06.335789  979198 main.go:141] libmachine: (addons-321835) Calling .GetConfigRaw
	I0116 02:01:06.336442  979198 main.go:141] libmachine: (addons-321835) Calling .GetIP
	I0116 02:01:06.338915  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:06.339288  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:06.339317  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:06.339583  979198 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/config.json ...
	I0116 02:01:06.339785  979198 start.go:128] duration metric: createHost completed in 26.013503139s
	I0116 02:01:06.339854  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:06.342032  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:06.342329  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:06.342357  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:06.342504  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:06.342710  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:06.342877  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:06.343020  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:06.343221  979198 main.go:141] libmachine: Using SSH client type: native
	I0116 02:01:06.343553  979198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0116 02:01:06.343565  979198 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 02:01:06.474434  979198 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705370466.458065870
	
	I0116 02:01:06.474462  979198 fix.go:206] guest clock: 1705370466.458065870
	I0116 02:01:06.474469  979198 fix.go:219] Guest: 2024-01-16 02:01:06.45806587 +0000 UTC Remote: 2024-01-16 02:01:06.339799009 +0000 UTC m=+26.142768511 (delta=118.266861ms)
	I0116 02:01:06.474515  979198 fix.go:190] guest clock delta is within tolerance: 118.266861ms
	I0116 02:01:06.474536  979198 start.go:83] releasing machines lock for "addons-321835", held for 26.148359924s
	I0116 02:01:06.474557  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:06.474885  979198 main.go:141] libmachine: (addons-321835) Calling .GetIP
	I0116 02:01:06.477571  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:06.477910  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:06.477933  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:06.478132  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:06.478661  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:06.478875  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:06.479012  979198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:01:06.479060  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:06.479136  979198 ssh_runner.go:195] Run: cat /version.json
	I0116 02:01:06.479166  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:06.481671  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:06.481880  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:06.482073  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:06.482100  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:06.482217  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:06.482356  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:06.482378  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:06.482380  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:06.482567  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:06.482610  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:06.482729  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:06.482787  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:06.482865  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:06.483015  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:06.575023  979198 ssh_runner.go:195] Run: systemctl --version
	I0116 02:01:06.600788  979198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 02:01:06.766472  979198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 02:01:06.772926  979198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 02:01:06.773044  979198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:01:06.788215  979198 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
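[editor's note] The find command above renames any pre-existing bridge/podman CNI configs to *.mk_disabled (here /etc/cni/net.d/87-podman-bridge.conflist) so they cannot conflict with the CNI config minikube installs later. To list what remains active (sketch):

    ls -l /etc/cni/net.d/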
	I0116 02:01:06.788247  979198 start.go:475] detecting cgroup driver to use...
	I0116 02:01:06.788341  979198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:01:06.805248  979198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:01:06.818333  979198 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:01:06.818412  979198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:01:06.830753  979198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:01:06.843082  979198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:01:06.946511  979198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:01:07.065472  979198 docker.go:233] disabling docker service ...
	I0116 02:01:07.065549  979198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:01:07.079326  979198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:01:07.091456  979198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:01:07.193298  979198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:01:07.295136  979198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 02:01:07.309685  979198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:01:07.327054  979198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 02:01:07.327124  979198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:01:07.337293  979198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 02:01:07.337379  979198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:01:07.347679  979198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:01:07.359055  979198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
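[editor's note] At this point the runtime side is wired up: /etc/crictl.yaml points crictl at the CRI-O socket, and the three sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the following keys (values taken from the commands above, not read back from the file). A quick sketch to confirm crictl can reach CRI-O through that endpoint follows the snippet.

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

    cat /etc/crictl.yaml
    sudo crictl version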
	I0116 02:01:07.370929  979198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:01:07.381749  979198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:01:07.391166  979198 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 02:01:07.391243  979198 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 02:01:07.404232  979198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:01:07.413556  979198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:01:07.517743  979198 ssh_runner.go:195] Run: sudo systemctl restart crio
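[editor's note] Because /proc/sys/net/bridge/bridge-nf-call-iptables did not exist, br_netfilter was loaded explicitly and IPv4 forwarding was enabled before CRI-O was restarted. A hedged verification sketch:

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward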
	I0116 02:01:07.692246  979198 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 02:01:07.692367  979198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 02:01:07.697859  979198 start.go:543] Will wait 60s for crictl version
	I0116 02:01:07.697939  979198 ssh_runner.go:195] Run: which crictl
	I0116 02:01:07.702378  979198 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:01:07.746772  979198 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 02:01:07.746875  979198 ssh_runner.go:195] Run: crio --version
	I0116 02:01:07.799380  979198 ssh_runner.go:195] Run: crio --version
	I0116 02:01:07.853295  979198 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 02:01:07.854784  979198 main.go:141] libmachine: (addons-321835) Calling .GetIP
	I0116 02:01:07.857259  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:07.857539  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:07.857563  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:07.857819  979198 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 02:01:07.861914  979198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:01:07.874794  979198 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:01:07.874904  979198 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:01:07.910689  979198 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 02:01:07.910765  979198 ssh_runner.go:195] Run: which lz4
	I0116 02:01:07.914862  979198 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 02:01:07.919200  979198 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 02:01:07.919245  979198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 02:01:09.811452  979198 crio.go:444] Took 1.896648 seconds to copy over tarball
	I0116 02:01:09.811545  979198 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 02:01:13.070956  979198 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.259368437s)
	I0116 02:01:13.071011  979198 crio.go:451] Took 3.259533 seconds to extract the tarball
	I0116 02:01:13.071022  979198 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 02:01:13.115130  979198 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:01:13.189493  979198 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 02:01:13.189520  979198 cache_images.go:84] Images are preloaded, skipping loading
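[editor's note] Once the preload tarball has been extracted into /var, the second crictl images call above reports all v1.28.4 images as present, so separate image loading is skipped. To list them by hand (sketch):

    sudo crictl images | grep registry.k8s.io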
	I0116 02:01:13.189583  979198 ssh_runner.go:195] Run: crio config
	I0116 02:01:13.256269  979198 cni.go:84] Creating CNI manager for ""
	I0116 02:01:13.256299  979198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 02:01:13.256322  979198 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:01:13.256340  979198 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.11 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-321835 NodeName:addons-321835 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 02:01:13.256566  979198 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-321835"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 02:01:13.256673  979198 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-321835 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-321835 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 02:01:13.256739  979198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 02:01:13.266853  979198 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 02:01:13.266938  979198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 02:01:13.276300  979198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0116 02:01:13.292485  979198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 02:01:13.308389  979198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
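[editor's note] The kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new, and the 10-kubeadm.conf drop-in above clears ExecStart and re-launches kubelet against the CRI-O socket with the node IP. A hedged sketch for checking both on the guest before init runs, using the staged binary path shown later in the log:

    sudo systemctl daemon-reload && systemctl cat kubelet --no-pager
    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run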
	I0116 02:01:13.326275  979198 ssh_runner.go:195] Run: grep 192.168.39.11	control-plane.minikube.internal$ /etc/hosts
	I0116 02:01:13.330243  979198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:01:13.342518  979198 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835 for IP: 192.168.39.11
	I0116 02:01:13.342560  979198 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:01:13.342742  979198 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 02:01:13.519145  979198 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt ...
	I0116 02:01:13.519179  979198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt: {Name:mkcdf06fbdf9305b32152ad973f85189a1d59542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:01:13.519346  979198 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key ...
	I0116 02:01:13.519357  979198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key: {Name:mkdd29ab4ffe4ec00e5fb8a1491f5890c4bfc681 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:01:13.519426  979198 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 02:01:13.667242  979198 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt ...
	I0116 02:01:13.667277  979198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt: {Name:mk61d28b18197c6c0670a100ea05fa7e8f77b840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:01:13.667435  979198 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key ...
	I0116 02:01:13.667447  979198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key: {Name:mka4fde17583921e226e983444719b05b4a23f41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:01:13.667560  979198 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.key
	I0116 02:01:13.667576  979198 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt with IP's: []
	I0116 02:01:14.070886  979198 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt ...
	I0116 02:01:14.070931  979198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: {Name:mk38f2799260b3479dc3c2870f3d68029ae8b5bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:01:14.071129  979198 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.key ...
	I0116 02:01:14.071141  979198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.key: {Name:mk1a3416635cf35d5d56056798ff1f2c7121b986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:01:14.071217  979198 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/apiserver.key.f4c704eb
	I0116 02:01:14.071236  979198 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/apiserver.crt.f4c704eb with IP's: [192.168.39.11 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 02:01:14.181826  979198 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/apiserver.crt.f4c704eb ...
	I0116 02:01:14.181863  979198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/apiserver.crt.f4c704eb: {Name:mk2de3e2cf05f07bc16cbf236bd12f1e16bf4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:01:14.182033  979198 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/apiserver.key.f4c704eb ...
	I0116 02:01:14.182046  979198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/apiserver.key.f4c704eb: {Name:mk038539cf0a0568c6aca1ff1a875e0093115562 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:01:14.182110  979198 certs.go:337] copying /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/apiserver.crt.f4c704eb -> /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/apiserver.crt
	I0116 02:01:14.182217  979198 certs.go:341] copying /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/apiserver.key.f4c704eb -> /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/apiserver.key
	I0116 02:01:14.182267  979198 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/proxy-client.key
	I0116 02:01:14.182285  979198 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/proxy-client.crt with IP's: []
	I0116 02:01:14.342646  979198 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/proxy-client.crt ...
	I0116 02:01:14.342683  979198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/proxy-client.crt: {Name:mk7bda8401d1afc11e29637cc7cfccbf50026fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:01:14.342840  979198 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/proxy-client.key ...
	I0116 02:01:14.342852  979198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/proxy-client.key: {Name:mkb751aadba22bf77b4e476b7719edf5ebce138f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:01:14.343012  979198 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 02:01:14.343048  979198 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 02:01:14.343073  979198 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:01:14.343097  979198 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 02:01:14.343751  979198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 02:01:14.367862  979198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 02:01:14.390532  979198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 02:01:14.412619  979198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 02:01:14.436845  979198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:01:14.460342  979198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 02:01:14.482171  979198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:01:14.503399  979198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 02:01:14.525256  979198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:01:14.546915  979198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 02:01:14.563388  979198 ssh_runner.go:195] Run: openssl version
	I0116 02:01:14.569203  979198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:01:14.579611  979198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:01:14.584036  979198 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:01:14.584113  979198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:01:14.589870  979198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
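[editor's note] The b5213941.0 symlink name comes from OpenSSL's subject hash of the minikube CA: the value printed by the openssl x509 -hash -noout call above becomes <hash>.0 under /etc/ssl/certs, which is how OpenSSL-based clients locate the CA in the system trust store. To reproduce the mapping by hand (sketch):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0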
	I0116 02:01:14.600272  979198 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:01:14.604233  979198 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:01:14.604295  979198 kubeadm.go:404] StartCluster: {Name:addons-321835 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-321835 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:01:14.604371  979198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 02:01:14.604420  979198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 02:01:14.644313  979198 cri.go:89] found id: ""
	I0116 02:01:14.644402  979198 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 02:01:14.654152  979198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 02:01:14.663482  979198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 02:01:14.672901  979198 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:01:14.672957  979198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 02:01:14.727315  979198 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 02:01:14.727373  979198 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 02:01:14.863218  979198 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 02:01:14.863373  979198 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 02:01:14.863510  979198 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 02:01:15.087146  979198 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:01:15.090603  979198 out.go:204]   - Generating certificates and keys ...
	I0116 02:01:15.090748  979198 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 02:01:15.090865  979198 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 02:01:15.222409  979198 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 02:01:15.395508  979198 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 02:01:15.446027  979198 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 02:01:15.570857  979198 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 02:01:15.692429  979198 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 02:01:15.692591  979198 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-321835 localhost] and IPs [192.168.39.11 127.0.0.1 ::1]
	I0116 02:01:15.942280  979198 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 02:01:15.942398  979198 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-321835 localhost] and IPs [192.168.39.11 127.0.0.1 ::1]
	I0116 02:01:16.187715  979198 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 02:01:16.374995  979198 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 02:01:16.437140  979198 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 02:01:16.437283  979198 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:01:16.523975  979198 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:01:16.998212  979198 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:01:17.088677  979198 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:01:17.328019  979198 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:01:17.328476  979198 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 02:01:17.330749  979198 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:01:17.332805  979198 out.go:204]   - Booting up control plane ...
	I0116 02:01:17.332933  979198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:01:17.333058  979198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:01:17.333157  979198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:01:17.347607  979198 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:01:17.350576  979198 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:01:17.350630  979198 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 02:01:17.480062  979198 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 02:01:25.985480  979198 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.506280 seconds
	I0116 02:01:25.985644  979198 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 02:01:26.003775  979198 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 02:01:26.536828  979198 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 02:01:26.537086  979198 kubeadm.go:322] [mark-control-plane] Marking the node addons-321835 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 02:01:27.056588  979198 kubeadm.go:322] [bootstrap-token] Using token: ofqsaw.qzr0b0jvfu2koe6v
	I0116 02:01:27.058146  979198 out.go:204]   - Configuring RBAC rules ...
	I0116 02:01:27.058292  979198 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 02:01:27.068740  979198 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 02:01:27.083231  979198 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 02:01:27.087113  979198 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 02:01:27.091839  979198 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 02:01:27.095707  979198 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 02:01:27.115994  979198 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 02:01:27.384818  979198 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 02:01:27.475797  979198 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 02:01:27.476878  979198 kubeadm.go:322] 
	I0116 02:01:27.476968  979198 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 02:01:27.476981  979198 kubeadm.go:322] 
	I0116 02:01:27.477079  979198 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 02:01:27.477089  979198 kubeadm.go:322] 
	I0116 02:01:27.477121  979198 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 02:01:27.477204  979198 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 02:01:27.477289  979198 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 02:01:27.477305  979198 kubeadm.go:322] 
	I0116 02:01:27.477377  979198 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 02:01:27.477387  979198 kubeadm.go:322] 
	I0116 02:01:27.477472  979198 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 02:01:27.477490  979198 kubeadm.go:322] 
	I0116 02:01:27.477554  979198 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 02:01:27.477670  979198 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 02:01:27.477779  979198 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 02:01:27.477791  979198 kubeadm.go:322] 
	I0116 02:01:27.477904  979198 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 02:01:27.478028  979198 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 02:01:27.478046  979198 kubeadm.go:322] 
	I0116 02:01:27.478144  979198 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ofqsaw.qzr0b0jvfu2koe6v \
	I0116 02:01:27.478274  979198 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b \
	I0116 02:01:27.478304  979198 kubeadm.go:322] 	--control-plane 
	I0116 02:01:27.478313  979198 kubeadm.go:322] 
	I0116 02:01:27.478418  979198 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 02:01:27.478427  979198 kubeadm.go:322] 
	I0116 02:01:27.478544  979198 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ofqsaw.qzr0b0jvfu2koe6v \
	I0116 02:01:27.478677  979198 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b 
	I0116 02:01:27.479216  979198 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
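For orientation only (this is not part of the captured log): once kubeadm prints the success message above, the control plane can be inspected directly on the node using the admin kubeconfig it just wrote. A minimal sketch, assuming the default kubeadm paths shown in the output:

```bash
# Illustrative check on the node after 'kubeadm init' (not taken from this log).
# admin.conf is root-owned, so run as root or via sudo.
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes -o wide        # the new control-plane node should be listed
kubectl get pods -n kube-system  # apiserver, controller-manager, scheduler, etcd, coredns, kube-proxy
```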
	I0116 02:01:27.479250  979198 cni.go:84] Creating CNI manager for ""
	I0116 02:01:27.479261  979198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 02:01:27.481393  979198 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 02:01:27.483040  979198 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 02:01:27.513994  979198 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
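The 457-byte conflist itself is not reproduced in the log. For context, a hedged way to see what minikube wrote for the bridge CNI (the file path comes from the log line above; the inspection command is illustrative, not from the log):

```bash
# Illustrative only: inspect the bridge CNI config minikube just copied to the node.
# Its exact contents (typically a "bridge" plugin with host-local IPAM plus a
# "portmap" plugin) are an assumption here, since the payload is not shown in the log.
sudo cat /etc/cni/net.d/1-k8s.conflist
```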
	I0116 02:01:27.550031  979198 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 02:01:27.550147  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:27.550153  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=addons-321835 minikube.k8s.io/updated_at=2024_01_16T02_01_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:27.577251  979198 ops.go:34] apiserver oom_adj: -16
	I0116 02:01:27.856287  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:28.356908  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:28.856890  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:29.356443  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:29.856338  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:30.356851  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:30.856575  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:31.356328  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:31.856627  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:32.356656  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:32.856527  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:33.357267  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:33.856378  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:34.356865  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:34.856737  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:35.356798  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:35.856704  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:36.356376  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:36.857171  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:37.356998  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:37.857314  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:38.356549  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:38.857160  979198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:01:38.958588  979198 kubeadm.go:1088] duration metric: took 11.408540531s to wait for elevateKubeSystemPrivileges.
	I0116 02:01:38.958623  979198 kubeadm.go:406] StartCluster complete in 24.354334473s
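The repeated `kubectl get sa default` invocations above are minikube polling until the default ServiceAccount exists, the step the summary line attributes to elevateKubeSystemPrivileges. A rough shell equivalent of that wait (illustrative; the actual loop is implemented in Go inside minikube):

```bash
# Rough equivalent of the polling seen above: retry until the "default"
# ServiceAccount is present, so the cluster-admin binding can take effect.
until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done
```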
	I0116 02:01:38.958645  979198 settings.go:142] acquiring lock: {Name:mk39c69fb5454efd5afab021c02c3bec1f1b4e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:01:38.958809  979198 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:01:38.959374  979198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:01:38.959630  979198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 02:01:38.959745  979198 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
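The map above records which addons the test run asked minikube to enable. As a hedged illustration only (not the exact invocation used by the test harness, which is not shown in this log), a user could request a similar selection from the CLI:

```bash
# Illustrative only: user-facing commands for a similar addon selection.
minikube start -p addons-321835 --driver=kvm2 --container-runtime=crio \
  --addons=ingress,ingress-dns,registry,metrics-server,csi-hostpath-driver,volumesnapshots
minikube -p addons-321835 addons list
```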
	I0116 02:01:38.959851  979198 config.go:182] Loaded profile config "addons-321835": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:01:38.959871  979198 addons.go:69] Setting helm-tiller=true in profile "addons-321835"
	I0116 02:01:38.959897  979198 addons.go:234] Setting addon helm-tiller=true in "addons-321835"
	I0116 02:01:38.959896  979198 addons.go:69] Setting yakd=true in profile "addons-321835"
	I0116 02:01:38.959894  979198 addons.go:69] Setting cloud-spanner=true in profile "addons-321835"
	I0116 02:01:38.959901  979198 addons.go:69] Setting ingress-dns=true in profile "addons-321835"
	I0116 02:01:38.959918  979198 addons.go:234] Setting addon yakd=true in "addons-321835"
	I0116 02:01:38.959920  979198 addons.go:69] Setting ingress=true in profile "addons-321835"
	I0116 02:01:38.959923  979198 addons.go:234] Setting addon cloud-spanner=true in "addons-321835"
	I0116 02:01:38.959936  979198 addons.go:234] Setting addon ingress-dns=true in "addons-321835"
	I0116 02:01:38.959949  979198 addons.go:69] Setting registry=true in profile "addons-321835"
	I0116 02:01:38.959852  979198 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-321835"
	I0116 02:01:38.959969  979198 addons.go:69] Setting inspektor-gadget=true in profile "addons-321835"
	I0116 02:01:38.959972  979198 host.go:66] Checking if "addons-321835" exists ...
	I0116 02:01:38.959964  979198 addons.go:69] Setting default-storageclass=true in profile "addons-321835"
	I0116 02:01:38.959979  979198 host.go:66] Checking if "addons-321835" exists ...
	I0116 02:01:38.959987  979198 addons.go:69] Setting metrics-server=true in profile "addons-321835"
	I0116 02:01:38.959936  979198 addons.go:234] Setting addon ingress=true in "addons-321835"
	I0116 02:01:38.959993  979198 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-321835"
	I0116 02:01:38.960001  979198 addons.go:234] Setting addon metrics-server=true in "addons-321835"
	I0116 02:01:38.960004  979198 addons.go:69] Setting volumesnapshots=true in profile "addons-321835"
	I0116 02:01:38.960014  979198 addons.go:234] Setting addon volumesnapshots=true in "addons-321835"
	I0116 02:01:38.960017  979198 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-321835"
	I0116 02:01:38.960025  979198 host.go:66] Checking if "addons-321835" exists ...
	I0116 02:01:38.960038  979198 host.go:66] Checking if "addons-321835" exists ...
	I0116 02:01:38.960040  979198 host.go:66] Checking if "addons-321835" exists ...
	I0116 02:01:38.960065  979198 host.go:66] Checking if "addons-321835" exists ...
	I0116 02:01:38.959989  979198 host.go:66] Checking if "addons-321835" exists ...
	I0116 02:01:38.959996  979198 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-321835"
	I0116 02:01:38.960458  979198 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-321835"
	I0116 02:01:38.960455  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.959963  979198 addons.go:234] Setting addon registry=true in "addons-321835"
	I0116 02:01:38.960481  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.960486  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.960498  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.960513  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.960527  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.960542  979198 host.go:66] Checking if "addons-321835" exists ...
	I0116 02:01:38.960572  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.960592  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.960593  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.960596  979198 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-321835"
	I0116 02:01:38.960608  979198 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-321835"
	I0116 02:01:38.960619  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.959987  979198 addons.go:69] Setting storage-provisioner=true in profile "addons-321835"
	I0116 02:01:38.960642  979198 addons.go:234] Setting addon storage-provisioner=true in "addons-321835"
	I0116 02:01:38.959980  979198 addons.go:234] Setting addon inspektor-gadget=true in "addons-321835"
	I0116 02:01:38.959978  979198 host.go:66] Checking if "addons-321835" exists ...
	I0116 02:01:38.960681  979198 addons.go:69] Setting gcp-auth=true in profile "addons-321835"
	I0116 02:01:38.960706  979198 mustload.go:65] Loading cluster: addons-321835
	I0116 02:01:38.960839  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.960886  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.960903  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.960902  979198 config.go:182] Loaded profile config "addons-321835": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:01:38.960919  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.960950  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.960467  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.960972  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.960983  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.960467  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.961012  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.961016  979198 host.go:66] Checking if "addons-321835" exists ...
	I0116 02:01:38.961036  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.961056  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.961106  979198 host.go:66] Checking if "addons-321835" exists ...
	I0116 02:01:38.961165  979198 host.go:66] Checking if "addons-321835" exists ...
	I0116 02:01:38.961238  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.961267  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.980482  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32875
	I0116 02:01:38.980505  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46807
	I0116 02:01:38.980547  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41337
	I0116 02:01:38.980611  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37105
	I0116 02:01:38.981063  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:38.981113  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:38.981129  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:38.981606  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:38.981611  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:38.981711  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:38.981734  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:38.981749  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:38.981710  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:38.981775  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:38.982105  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:38.982122  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:38.982129  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:38.982192  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34621
	I0116 02:01:38.982399  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:38.982427  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:38.982488  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:38.982612  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:38.982748  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.982791  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.982847  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42711
	I0116 02:01:38.983110  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:38.983522  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:38.983540  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:38.983850  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:38.986104  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.986136  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.986246  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.986281  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.986669  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.986704  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.988407  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.988433  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.990432  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.990471  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.991535  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:38.991589  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:38.998247  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:38.998418  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0116 02:01:38.999664  979198 addons.go:234] Setting addon default-storageclass=true in "addons-321835"
	I0116 02:01:38.999717  979198 host.go:66] Checking if "addons-321835" exists ...
	I0116 02:01:39.000142  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:39.000193  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:39.004595  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.004725  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.005309  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.006155  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:39.006203  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.006260  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:39.007032  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.007056  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.007731  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.008527  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:39.008589  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:39.024778  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44103
	I0116 02:01:39.025691  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.026399  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.026424  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.026506  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33823
	I0116 02:01:39.026701  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40549
	I0116 02:01:39.026824  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.027317  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.027476  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.027491  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.027570  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.027841  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:39.027978  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.027991  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.028042  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.028649  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:39.028702  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:39.028945  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37459
	I0116 02:01:39.028990  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.029588  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:39.029627  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.029636  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:39.030452  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.030474  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.030906  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.031234  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:39.032891  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0116 02:01:39.033348  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:39.033527  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.036237  979198 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0116 02:01:39.033934  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:39.034462  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.035326  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42671
	I0116 02:01:39.036764  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42625
	I0116 02:01:39.037750  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44695
	I0116 02:01:39.038404  979198 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0116 02:01:39.038423  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.038428  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0116 02:01:39.038448  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:39.039241  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.039402  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44547
	I0116 02:01:39.039559  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.039627  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.041314  979198 out.go:177]   - Using image docker.io/registry:2.8.3
	I0116 02:01:39.040152  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.040185  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.040662  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.042539  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.043813  979198 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0116 02:01:39.042603  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.042695  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.043376  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41665
	I0116 02:01:39.043417  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.043543  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.043734  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:39.044341  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:39.045188  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:39.045387  979198 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0116 02:01:39.045408  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0116 02:01:39.045435  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:39.045527  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.045532  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:39.045560  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.044583  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.045599  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.045794  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:39.045877  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.046009  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:39.046061  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.046075  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:39.046084  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.047212  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:39.047779  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.047889  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:39.048049  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.048063  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.048322  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:39.048360  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:39.048371  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I0116 02:01:39.048714  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:39.048754  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:39.048777  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.050496  979198 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0116 02:01:39.049003  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:39.049056  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.051923  979198 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 02:01:39.051940  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 02:01:39.051967  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:39.052237  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.052410  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.052447  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.052549  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.052954  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.053200  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:39.055189  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:39.056979  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:39.056985  979198 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0116 02:01:39.059259  979198 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0116 02:01:39.057037  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:39.056330  979198 host.go:66] Checking if "addons-321835" exists ...
	I0116 02:01:39.056597  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:39.055666  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40947
	I0116 02:01:39.055913  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.057086  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:39.057338  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:39.059296  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0116 02:01:39.059335  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:39.059879  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:39.059905  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:39.059925  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.059951  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:39.059907  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.060182  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:39.060209  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:39.060360  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:39.060374  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:39.060542  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:39.060888  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:39.061142  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.061681  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.061710  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.062216  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.062398  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:39.063954  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.064617  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:39.064789  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:39.064937  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:39.065078  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:39.065746  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:39.065777  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.068438  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35187
	I0116 02:01:39.069033  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.070055  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.070076  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.070768  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.071372  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:39.071424  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:39.072264  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34255
	I0116 02:01:39.073382  979198 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-321835"
	I0116 02:01:39.073427  979198 host.go:66] Checking if "addons-321835" exists ...
	I0116 02:01:39.073687  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:39.073729  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:39.073972  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41243
	I0116 02:01:39.074130  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33881
	I0116 02:01:39.074710  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.075283  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.075302  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.075685  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.075879  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:39.076463  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I0116 02:01:39.076791  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.077091  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.077526  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.077543  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.078100  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:39.078165  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.078283  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.078382  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:39.080226  979198 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0116 02:01:39.078776  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.079196  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.080591  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I0116 02:01:39.081654  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.081687  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.081196  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:39.083154  979198 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0116 02:01:39.082161  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.082203  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.082243  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.083580  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42749
	I0116 02:01:39.084511  979198 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0116 02:01:39.086035  979198 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0116 02:01:39.086058  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0116 02:01:39.086081  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:39.087649  979198 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0116 02:01:39.084739  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:39.084773  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:39.085364  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.085413  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39683
	I0116 02:01:39.085453  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.089002  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.090615  979198 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0116 02:01:39.089434  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.089569  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.090437  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.090487  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.091128  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:39.092864  979198 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0116 02:01:39.091373  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:39.091439  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:39.091827  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.091864  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:39.092004  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:39.092333  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.093988  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.095330  979198 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0116 02:01:39.096643  979198 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0116 02:01:39.094443  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:39.094447  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.094477  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.096190  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:39.096541  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33399
	I0116 02:01:39.096591  979198 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0116 02:01:39.094185  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.097157  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36455
	I0116 02:01:39.098058  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:39.098092  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:39.098190  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:39.099026  979198 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0116 02:01:39.099511  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.099642  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.100530  979198 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0116 02:01:39.100716  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:39.101955  979198 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 02:01:39.102030  979198 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:01:39.104329  979198 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:01:39.102533  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:39.102672  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.102682  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.103025  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0116 02:01:39.103257  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:39.103367  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I0116 02:01:39.104259  979198 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:01:39.105722  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.105789  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.105841  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:39.107085  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 02:01:39.107161  979198 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0116 02:01:39.107731  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.108226  979198 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:01:39.108659  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.108663  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.109057  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36571
	I0116 02:01:39.109440  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:39.109450  979198 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0116 02:01:39.109462  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0116 02:01:39.110592  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:39.110643  979198 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0116 02:01:39.112157  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:39.113508  979198 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0116 02:01:39.113526  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0116 02:01:39.113541  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:39.110818  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:39.110828  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:39.111320  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.116434  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:39.116459  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.111350  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.111411  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.116534  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.116563  979198 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 02:01:39.112235  979198 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0116 02:01:39.112387  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:39.114442  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.114670  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.115273  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:39.115432  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:39.116099  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:39.116642  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0116 02:01:39.116675  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:39.116712  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0116 02:01:39.116731  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:39.116798  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:39.116825  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.118301  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:39.118311  979198 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0116 02:01:39.116847  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:39.116918  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.117029  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:39.117454  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.117572  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:39.117588  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:39.118163  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.118562  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:39.119532  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:39.119561  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.119571  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.119605  979198 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 02:01:39.119618  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0116 02:01:39.119636  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:39.119637  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.120450  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:39.120471  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:39.120496  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:39.120508  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:39.120623  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:39.120791  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:39.120805  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:39.121261  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:39.122209  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.122793  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:39.123038  979198 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 02:01:39.123049  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 02:01:39.123062  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:39.123037  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:39.123117  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:39.123152  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.123379  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.123849  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.124255  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:39.124291  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:39.124316  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.124454  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:39.124520  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:39.124548  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.124701  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:39.124739  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.124849  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:39.125014  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:39.125011  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:39.125256  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:39.125633  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:39.125743  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:39.125906  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:39.125943  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:39.126183  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:39.126695  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.126709  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:39.126788  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:39.126819  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.126988  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:39.127138  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:39.127275  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:39.127390  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	W0116 02:01:39.128669  979198 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44290->192.168.39.11:22: read: connection reset by peer
	I0116 02:01:39.128710  979198 retry.go:31] will retry after 280.11462ms: ssh: handshake failed: read tcp 192.168.39.1:44290->192.168.39.11:22: read: connection reset by peer
	W0116 02:01:39.129105  979198 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44310->192.168.39.11:22: read: connection reset by peer
	I0116 02:01:39.129127  979198 retry.go:31] will retry after 290.674794ms: ssh: handshake failed: read tcp 192.168.39.1:44310->192.168.39.11:22: read: connection reset by peer
	I0116 02:01:39.139465  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0116 02:01:39.139867  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:39.140334  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:39.140382  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:39.140702  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:39.140889  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:39.142548  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:39.144862  979198 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0116 02:01:39.146220  979198 out.go:177]   - Using image docker.io/busybox:stable
	I0116 02:01:39.147554  979198 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 02:01:39.147574  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0116 02:01:39.147598  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:39.151115  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.151546  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:39.151573  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:39.151785  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:39.152048  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:39.152220  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:39.152374  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:39.307862  979198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0116 02:01:39.313151  979198 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 02:01:39.313174  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0116 02:01:39.339320  979198 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0116 02:01:39.339355  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0116 02:01:39.349762  979198 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0116 02:01:39.349813  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0116 02:01:39.378694  979198 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0116 02:01:39.378723  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0116 02:01:39.390170  979198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 02:01:39.419464  979198 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0116 02:01:39.419491  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0116 02:01:39.505036  979198 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 02:01:39.505074  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 02:01:39.511677  979198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 02:01:39.539893  979198 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0116 02:01:39.539939  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0116 02:01:39.541475  979198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 02:01:39.547382  979198 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0116 02:01:39.547406  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0116 02:01:39.554270  979198 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0116 02:01:39.554296  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0116 02:01:39.555717  979198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:01:39.564420  979198 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0116 02:01:39.564488  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0116 02:01:39.567968  979198 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0116 02:01:39.567993  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0116 02:01:39.588988  979198 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0116 02:01:39.589014  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0116 02:01:39.622384  979198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 02:01:39.663245  979198 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-321835" context rescaled to 1 replicas
	I0116 02:01:39.663309  979198 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:01:39.667159  979198 out.go:177] * Verifying Kubernetes components...
	I0116 02:01:39.668726  979198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:01:39.741503  979198 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 02:01:39.741537  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 02:01:39.834553  979198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0116 02:01:39.843108  979198 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0116 02:01:39.843136  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0116 02:01:39.874764  979198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0116 02:01:39.880249  979198 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0116 02:01:39.880282  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0116 02:01:39.889632  979198 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0116 02:01:39.889671  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0116 02:01:39.891432  979198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 02:01:39.906001  979198 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0116 02:01:39.906032  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0116 02:01:39.914179  979198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 02:01:39.944984  979198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 02:01:40.110800  979198 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0116 02:01:40.110833  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0116 02:01:40.132789  979198 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0116 02:01:40.132827  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0116 02:01:40.158995  979198 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0116 02:01:40.159016  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0116 02:01:40.164773  979198 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0116 02:01:40.164797  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0116 02:01:40.274166  979198 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:01:40.274198  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0116 02:01:40.275812  979198 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0116 02:01:40.275833  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0116 02:01:40.299984  979198 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0116 02:01:40.300016  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0116 02:01:40.309917  979198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0116 02:01:40.359692  979198 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0116 02:01:40.359730  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0116 02:01:40.383140  979198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:01:40.405347  979198 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0116 02:01:40.405390  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0116 02:01:40.473206  979198 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0116 02:01:40.473229  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0116 02:01:40.516064  979198 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0116 02:01:40.516095  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0116 02:01:40.553582  979198 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0116 02:01:40.553611  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0116 02:01:40.584835  979198 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 02:01:40.584859  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0116 02:01:40.631966  979198 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0116 02:01:40.631999  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0116 02:01:40.634885  979198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 02:01:40.696637  979198 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0116 02:01:40.696684  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0116 02:01:40.763510  979198 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 02:01:40.763543  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0116 02:01:40.809339  979198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 02:01:45.660345  979198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.352432942s)
	I0116 02:01:45.660435  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:45.660455  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:45.660904  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:45.660932  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:45.660928  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:45.660947  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:45.660962  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:45.661215  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:45.661237  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:46.207490  979198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.817283314s)
	I0116 02:01:46.207547  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:46.207576  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:46.208055  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:46.208121  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:46.208135  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:46.208150  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:46.208160  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:46.208545  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:46.208563  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:46.968949  979198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.427430884s)
	I0116 02:01:46.969031  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:46.969050  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:46.969215  979198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.457496801s)
	I0116 02:01:46.969267  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:46.969282  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:46.969757  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:46.969759  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:46.969771  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:46.969778  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:46.969792  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:46.969811  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:46.969817  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:46.969837  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:46.969823  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:46.969875  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:46.970211  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:46.970234  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:46.971530  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:46.971552  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:46.971569  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:47.085049  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:47.085079  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:47.085389  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:47.085413  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:47.085433  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:47.085613  979198 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0116 02:01:47.085642  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:47.089261  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:47.089677  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:47.089733  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:47.089975  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:47.090234  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:47.090429  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:47.090598  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:47.112968  979198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.557208799s)
	I0116 02:01:47.113028  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:47.113051  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:47.113082  979198 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.49065044s)
	I0116 02:01:47.113122  979198 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0116 02:01:47.113180  979198 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (7.444415597s)
	I0116 02:01:47.113225  979198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.278638774s)
	I0116 02:01:47.113265  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:47.113280  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:47.113293  979198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.238493882s)
	I0116 02:01:47.113327  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:47.113378  979198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.221920389s)
	I0116 02:01:47.113403  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:47.113412  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:47.113825  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:47.114305  979198 node_ready.go:35] waiting up to 6m0s for node "addons-321835" to be "Ready" ...
	I0116 02:01:47.115594  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:47.115599  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:47.115617  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:47.115640  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:47.115645  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:47.115651  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:47.115655  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:47.115658  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:47.115602  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:47.115665  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:47.115670  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:47.115670  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:47.115678  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:47.115679  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:47.115691  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:47.115702  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:47.115710  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:47.115681  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:47.115772  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:47.116104  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:47.116111  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:47.116125  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:47.116126  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:47.116141  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:47.116168  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:47.116177  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:47.116188  979198 addons.go:470] Verifying addon registry=true in "addons-321835"
	I0116 02:01:47.116207  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:47.116234  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:47.116242  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:47.118413  979198 out.go:177] * Verifying registry addon...
	I0116 02:01:47.116292  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:47.120210  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:47.120931  979198 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0116 02:01:47.270525  979198 node_ready.go:49] node "addons-321835" has status "Ready":"True"
	I0116 02:01:47.270553  979198 node_ready.go:38] duration metric: took 156.219717ms waiting for node "addons-321835" to be "Ready" ...
	I0116 02:01:47.270563  979198 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:01:47.328476  979198 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0116 02:01:47.348123  979198 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0116 02:01:47.348154  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:47.382573  979198 addons.go:234] Setting addon gcp-auth=true in "addons-321835"
	I0116 02:01:47.382644  979198 host.go:66] Checking if "addons-321835" exists ...
	I0116 02:01:47.382985  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:47.383023  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:47.398508  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34113
	I0116 02:01:47.398978  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:47.399486  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:47.399505  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:47.399927  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:47.400528  979198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:01:47.400569  979198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:01:47.415672  979198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41973
	I0116 02:01:47.416180  979198 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:01:47.416739  979198 main.go:141] libmachine: Using API Version  1
	I0116 02:01:47.416769  979198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:01:47.417103  979198 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:01:47.417336  979198 main.go:141] libmachine: (addons-321835) Calling .GetState
	I0116 02:01:47.418907  979198 main.go:141] libmachine: (addons-321835) Calling .DriverName
	I0116 02:01:47.419164  979198 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0116 02:01:47.419186  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHHostname
	I0116 02:01:47.421687  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:47.422133  979198 main.go:141] libmachine: (addons-321835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:ea", ip: ""} in network mk-addons-321835: {Iface:virbr1 ExpiryTime:2024-01-16 03:00:56 +0000 UTC Type:0 Mac:52:54:00:8e:69:ea Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-321835 Clientid:01:52:54:00:8e:69:ea}
	I0116 02:01:47.422163  979198 main.go:141] libmachine: (addons-321835) DBG | domain addons-321835 has defined IP address 192.168.39.11 and MAC address 52:54:00:8e:69:ea in network mk-addons-321835
	I0116 02:01:47.422292  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHPort
	I0116 02:01:47.422513  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHKeyPath
	I0116 02:01:47.422697  979198 main.go:141] libmachine: (addons-321835) Calling .GetSSHUsername
	I0116 02:01:47.422863  979198 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/addons-321835/id_rsa Username:docker}
	I0116 02:01:47.483866  979198 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace to be "Ready" ...
	I0116 02:01:47.486634  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:47.486657  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:47.486990  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:47.487009  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:47.640519  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:48.185636  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:48.734972  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:48.828097  979198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.913870442s)
	I0116 02:01:48.828186  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:48.828197  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:48.828227  979198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.883204327s)
	I0116 02:01:48.828280  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:48.828310  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:48.828325  979198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.518364993s)
	I0116 02:01:48.828348  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:48.828391  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:48.828501  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:48.828515  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:48.828525  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:48.828533  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:48.828548  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:48.828579  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:48.828587  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:48.828596  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:48.828604  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:48.828645  979198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.445459601s)
	W0116 02:01:48.828706  979198 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0116 02:01:48.828735  979198 retry.go:31] will retry after 206.350953ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0116 02:01:48.828807  979198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.193894244s)
	I0116 02:01:48.828832  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:48.828848  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:48.828914  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:48.828932  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:48.828980  979198 addons.go:470] Verifying addon ingress=true in "addons-321835"
	I0116 02:01:48.829030  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:48.829041  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:48.829051  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:48.831077  979198 out.go:177] * Verifying ingress addon...
	I0116 02:01:48.829059  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:48.829354  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:48.829387  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:48.829412  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:48.829421  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:48.831128  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:48.832804  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:48.832818  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:48.831136  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:48.831365  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:48.831415  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:48.832890  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:48.834367  979198 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-321835 service yakd-dashboard -n yakd-dashboard
	
	I0116 02:01:48.832910  979198 addons.go:470] Verifying addon metrics-server=true in "addons-321835"
	I0116 02:01:48.833076  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:48.833080  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:48.833848  979198 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0116 02:01:48.835710  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:48.856034  979198 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0116 02:01:48.856071  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:49.036170  979198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:01:49.160835  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:49.345594  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:49.534930  979198 pod_ready.go:102] pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace has status "Ready":"False"
	I0116 02:01:49.761177  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:49.871345  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:49.899809  979198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.090388012s)
	I0116 02:01:49.899874  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:49.899891  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:49.899892  979198 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.480701238s)
	I0116 02:01:49.902280  979198 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:01:49.900281  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:49.900315  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:49.903878  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:49.903915  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:49.903937  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:49.905230  979198 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0116 02:01:49.906584  979198 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0116 02:01:49.904320  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:49.906649  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:49.906674  979198 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-321835"
	I0116 02:01:49.904372  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:49.908313  979198 out.go:177] * Verifying csi-hostpath-driver addon...
	I0116 02:01:49.906613  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0116 02:01:49.911851  979198 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0116 02:01:49.931101  979198 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0116 02:01:49.931142  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:49.976373  979198 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0116 02:01:49.976407  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0116 02:01:50.025958  979198 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 02:01:50.025993  979198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0116 02:01:50.052526  979198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 02:01:50.133872  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:50.364670  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:50.421323  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:50.633557  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:50.854228  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:50.926476  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:51.152812  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:51.373720  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:51.428643  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:51.514973  979198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.478749271s)
	I0116 02:01:51.515039  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:51.515058  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:51.515334  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:51.515353  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:51.515364  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:51.515372  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:51.515672  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:51.515694  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:51.630112  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:51.888119  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:51.940062  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:52.091508  979198 pod_ready.go:102] pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace has status "Ready":"False"
	I0116 02:01:52.094104  979198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.041522548s)
	I0116 02:01:52.094160  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:52.094178  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:52.094484  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:52.094547  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:52.094565  979198 main.go:141] libmachine: Making call to close driver server
	I0116 02:01:52.094575  979198 main.go:141] libmachine: (addons-321835) Calling .Close
	I0116 02:01:52.094877  979198 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:01:52.094903  979198 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:01:52.094914  979198 main.go:141] libmachine: (addons-321835) DBG | Closing plugin on server side
	I0116 02:01:52.096905  979198 addons.go:470] Verifying addon gcp-auth=true in "addons-321835"
	I0116 02:01:52.099012  979198 out.go:177] * Verifying gcp-auth addon...
	I0116 02:01:52.100981  979198 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0116 02:01:52.121270  979198 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0116 02:01:52.121307  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:01:52.179757  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:52.346058  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:52.423135  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:52.630897  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:01:52.635198  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:52.846561  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:52.930500  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:53.105004  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:01:53.129690  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:53.346197  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:53.422264  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:53.605624  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:01:53.628841  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:53.843084  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:53.919974  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:54.108813  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:01:54.132012  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:54.347247  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:54.419111  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:54.517827  979198 pod_ready.go:102] pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace has status "Ready":"False"
	I0116 02:01:54.606828  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:01:54.627269  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:54.842640  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:54.923792  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:55.108960  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:01:55.132272  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:55.350825  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:55.422202  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:55.605415  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:01:55.630263  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:55.848455  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:55.932710  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:56.105698  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:01:56.126824  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:56.341181  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:56.427892  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:56.606396  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:01:56.627820  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:56.849119  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:56.926253  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:56.991278  979198 pod_ready.go:102] pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace has status "Ready":"False"
	I0116 02:01:57.104998  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:01:57.128247  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:57.340751  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:57.419351  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:57.604857  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:01:57.626175  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:57.849176  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:57.925367  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:58.104877  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:01:58.127418  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:58.346552  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:58.761918  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:01:58.762594  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:58.771024  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:58.844486  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:58.922742  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:58.992697  979198 pod_ready.go:102] pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace has status "Ready":"False"
	I0116 02:01:59.106031  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:01:59.126522  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:59.342654  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:59.419203  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:01:59.605431  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:01:59.627491  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:01:59.845868  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:01:59.939562  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:00.105328  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:00.126001  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:00.340978  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:00.428318  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:00.605201  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:00.627269  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:00.842855  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:00.917116  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:00.994326  979198 pod_ready.go:102] pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace has status "Ready":"False"
	I0116 02:02:01.106639  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:01.139058  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:01.340246  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:01.417865  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:01.608310  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:01.641654  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:01.849488  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:01.924235  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:02.114555  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:02.154385  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:02.343288  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:02.422899  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:02.608327  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:02.631544  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:03.196328  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:03.207230  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:03.207401  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:03.210305  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:03.212006  979198 pod_ready.go:102] pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace has status "Ready":"False"
	I0116 02:02:03.340573  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:03.420653  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:03.607136  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:03.631318  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:03.840977  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:03.917978  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:04.105596  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:04.127634  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:04.344416  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:04.429464  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:04.605048  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:04.626490  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:04.841579  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:04.918900  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:05.107217  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:05.137204  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:05.349587  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:05.417642  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:05.491001  979198 pod_ready.go:102] pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace has status "Ready":"False"
	I0116 02:02:05.610001  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:05.627063  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:05.842320  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:05.923124  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:06.104917  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:06.141144  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:06.340753  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:06.440173  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:06.619932  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:06.626479  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:06.840999  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:06.918208  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:07.105021  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:07.127573  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:07.342388  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:07.417523  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:07.492244  979198 pod_ready.go:102] pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace has status "Ready":"False"
	I0116 02:02:07.606280  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:07.627676  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:07.846012  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:07.922063  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:08.105357  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:08.129331  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:08.340759  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:08.419113  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:08.605094  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:08.633770  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:08.841342  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:08.920473  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:09.105881  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:09.127026  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:09.342466  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:09.421620  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:09.605908  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:09.630030  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:09.845632  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:09.918242  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:09.991917  979198 pod_ready.go:102] pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace has status "Ready":"False"
	I0116 02:02:10.108009  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:10.142081  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:10.340825  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:10.422820  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:10.605355  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:10.626259  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:10.841001  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:10.919046  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:11.105183  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:11.127304  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:11.341152  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:11.418773  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:11.606037  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:11.627111  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:11.840779  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:11.918886  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:12.105221  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:12.127834  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:12.344680  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:12.418203  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:12.491249  979198 pod_ready.go:102] pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace has status "Ready":"False"
	I0116 02:02:12.605104  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:12.627153  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:12.840538  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:12.917957  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:13.105774  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:13.126821  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:13.341428  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:13.418021  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:13.605863  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:13.626447  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:13.842818  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:13.918448  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:14.105510  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:14.128306  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:14.342035  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:14.421979  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:14.493750  979198 pod_ready.go:102] pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace has status "Ready":"False"
	I0116 02:02:14.606167  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:14.626911  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:14.841677  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:14.932523  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:15.105981  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:15.126691  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:15.341269  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:15.435476  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:15.605440  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:15.627419  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:15.841271  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:15.919233  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:16.108006  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:16.127490  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:16.342389  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:16.420344  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:16.493793  979198 pod_ready.go:102] pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace has status "Ready":"False"
	I0116 02:02:16.605754  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:16.626993  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:16.840849  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:16.921715  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:17.105786  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:17.127335  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:17.340841  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:17.424181  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:17.606264  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:17.628345  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:17.841020  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:17.919044  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:18.105552  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:18.132666  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:18.343982  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:18.418633  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:18.606060  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:18.626637  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:18.841182  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:18.918588  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:18.991901  979198 pod_ready.go:102] pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace has status "Ready":"False"
	I0116 02:02:19.105526  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:19.126825  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:19.341116  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:19.418321  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:19.605323  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:19.627100  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:19.840037  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:19.919531  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:20.105279  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:20.126009  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:20.342159  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:20.419790  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:20.703858  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:20.703976  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:21.126563  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:21.129960  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:21.131376  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:21.144281  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:21.156043  979198 pod_ready.go:102] pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace has status "Ready":"False"
	I0116 02:02:21.340630  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:21.418680  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:21.491030  979198 pod_ready.go:92] pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace has status "Ready":"True"
	I0116 02:02:21.491060  979198 pod_ready.go:81] duration metric: took 34.00716145s waiting for pod "coredns-5dd5756b68-hwrh2" in "kube-system" namespace to be "Ready" ...
	I0116 02:02:21.491073  979198 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-321835" in "kube-system" namespace to be "Ready" ...
	I0116 02:02:21.496124  979198 pod_ready.go:92] pod "etcd-addons-321835" in "kube-system" namespace has status "Ready":"True"
	I0116 02:02:21.496151  979198 pod_ready.go:81] duration metric: took 5.068765ms waiting for pod "etcd-addons-321835" in "kube-system" namespace to be "Ready" ...
	I0116 02:02:21.496163  979198 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-321835" in "kube-system" namespace to be "Ready" ...
	I0116 02:02:21.501754  979198 pod_ready.go:92] pod "kube-apiserver-addons-321835" in "kube-system" namespace has status "Ready":"True"
	I0116 02:02:21.501777  979198 pod_ready.go:81] duration metric: took 5.604908ms waiting for pod "kube-apiserver-addons-321835" in "kube-system" namespace to be "Ready" ...
	I0116 02:02:21.501796  979198 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-321835" in "kube-system" namespace to be "Ready" ...
	I0116 02:02:21.513679  979198 pod_ready.go:92] pod "kube-controller-manager-addons-321835" in "kube-system" namespace has status "Ready":"True"
	I0116 02:02:21.513704  979198 pod_ready.go:81] duration metric: took 11.884052ms waiting for pod "kube-controller-manager-addons-321835" in "kube-system" namespace to be "Ready" ...
	I0116 02:02:21.513716  979198 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4jmxg" in "kube-system" namespace to be "Ready" ...
	I0116 02:02:21.520523  979198 pod_ready.go:92] pod "kube-proxy-4jmxg" in "kube-system" namespace has status "Ready":"True"
	I0116 02:02:21.520550  979198 pod_ready.go:81] duration metric: took 6.826235ms waiting for pod "kube-proxy-4jmxg" in "kube-system" namespace to be "Ready" ...
	I0116 02:02:21.520564  979198 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-321835" in "kube-system" namespace to be "Ready" ...
	I0116 02:02:21.604739  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:21.626817  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:21.841854  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:21.888876  979198 pod_ready.go:92] pod "kube-scheduler-addons-321835" in "kube-system" namespace has status "Ready":"True"
	I0116 02:02:21.888911  979198 pod_ready.go:81] duration metric: took 368.337206ms waiting for pod "kube-scheduler-addons-321835" in "kube-system" namespace to be "Ready" ...
	I0116 02:02:21.888927  979198 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-296vl" in "kube-system" namespace to be "Ready" ...
	I0116 02:02:21.922753  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:22.104944  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:22.126982  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:22.340945  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:22.422943  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:22.609541  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:22.630240  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:22.841294  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:22.919543  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:23.107319  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:23.126816  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:23.341398  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:23.417986  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:23.605766  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:23.646736  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:23.841677  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:23.898058  979198 pod_ready.go:102] pod "metrics-server-7c66d45ddc-296vl" in "kube-system" namespace has status "Ready":"False"
	I0116 02:02:23.925871  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:24.105303  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:24.129092  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:24.343013  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:24.417248  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:24.605785  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:24.626607  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:24.846339  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:24.928026  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:25.109256  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:25.126085  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:25.343215  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:25.420182  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:25.630580  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:25.649385  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:25.841721  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:25.902495  979198 pod_ready.go:102] pod "metrics-server-7c66d45ddc-296vl" in "kube-system" namespace has status "Ready":"False"
	I0116 02:02:25.921027  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:26.150260  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:26.151950  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:26.342137  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:26.426240  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:26.605566  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:26.626132  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:26.841239  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:26.919482  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:27.106515  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:27.128350  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:27.346720  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:27.395882  979198 pod_ready.go:92] pod "metrics-server-7c66d45ddc-296vl" in "kube-system" namespace has status "Ready":"True"
	I0116 02:02:27.395911  979198 pod_ready.go:81] duration metric: took 5.506976907s waiting for pod "metrics-server-7c66d45ddc-296vl" in "kube-system" namespace to be "Ready" ...
	I0116 02:02:27.395922  979198 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-nvq58" in "kube-system" namespace to be "Ready" ...
	I0116 02:02:27.419088  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:27.605724  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:27.626424  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:27.841640  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:27.917938  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:28.106582  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:28.126591  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:28.341535  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:28.417692  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:28.609073  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:28.629310  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:28.841069  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:28.918381  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:29.105522  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:29.127541  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:29.340770  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:29.404440  979198 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nvq58" in "kube-system" namespace has status "Ready":"False"
	I0116 02:02:29.418236  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:29.605215  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:29.632043  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:29.841548  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:29.918750  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:30.105757  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:30.126227  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:30.340933  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:30.418201  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:30.607540  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:30.626843  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:30.843562  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:30.918583  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:31.105133  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:31.127465  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:31.340866  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:31.418243  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:31.605009  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:31.629476  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:31.840979  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:31.902652  979198 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nvq58" in "kube-system" namespace has status "Ready":"False"
	I0116 02:02:31.917847  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:32.104696  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:32.127408  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:32.341519  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:32.417629  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:32.885618  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:32.885857  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:32.889964  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:32.926107  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:33.105402  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:33.127332  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:33.341063  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:33.416926  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:33.605365  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:33.626507  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:33.841013  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:33.910281  979198 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nvq58" in "kube-system" namespace has status "Ready":"False"
	I0116 02:02:33.918057  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:34.105174  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:34.126120  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:34.340220  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:34.418210  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:34.604713  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:34.627241  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:34.841929  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:34.918076  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:35.105036  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:35.126598  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:35.347741  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:35.421472  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:35.611624  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:35.627144  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:35.841080  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:35.917901  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:36.106841  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:36.128163  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:36.341101  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:36.403192  979198 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nvq58" in "kube-system" namespace has status "Ready":"False"
	I0116 02:02:36.419298  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:36.606596  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:36.631010  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:36.841890  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:36.917643  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:37.106570  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:37.133748  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:37.340889  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:37.427883  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:38.053592  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:38.093011  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:38.093058  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:38.093083  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:38.108218  979198 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-nvq58" in "kube-system" namespace has status "Ready":"True"
	I0116 02:02:38.108265  979198 pod_ready.go:81] duration metric: took 10.712331219s waiting for pod "nvidia-device-plugin-daemonset-nvq58" in "kube-system" namespace to be "Ready" ...
	I0116 02:02:38.108292  979198 pod_ready.go:38] duration metric: took 50.837719071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:02:38.108321  979198 api_server.go:52] waiting for apiserver process to appear ...
	I0116 02:02:38.108401  979198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:02:38.111802  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:38.128053  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:38.189945  979198 api_server.go:72] duration metric: took 58.526568771s to wait for apiserver process to appear ...
	I0116 02:02:38.189982  979198 api_server.go:88] waiting for apiserver healthz status ...
	I0116 02:02:38.190011  979198 api_server.go:253] Checking apiserver healthz at https://192.168.39.11:8443/healthz ...
	I0116 02:02:38.195039  979198 api_server.go:279] https://192.168.39.11:8443/healthz returned 200:
	ok
	I0116 02:02:38.196354  979198 api_server.go:141] control plane version: v1.28.4
	I0116 02:02:38.196379  979198 api_server.go:131] duration metric: took 6.390155ms to wait for apiserver health ...
	I0116 02:02:38.196388  979198 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 02:02:38.207717  979198 system_pods.go:59] 18 kube-system pods found
	I0116 02:02:38.207751  979198 system_pods.go:61] "coredns-5dd5756b68-hwrh2" [e53eefed-07be-4b83-95cd-b7784738a353] Running
	I0116 02:02:38.207756  979198 system_pods.go:61] "csi-hostpath-attacher-0" [cd5ad0a5-56a1-4e7a-88cc-fd5974cb13f8] Running
	I0116 02:02:38.207761  979198 system_pods.go:61] "csi-hostpath-resizer-0" [3d07a75e-7b41-43ea-9cd3-983440c7ea7c] Running
	I0116 02:02:38.207770  979198 system_pods.go:61] "csi-hostpathplugin-9dkh7" [e97162d6-3423-4069-8aec-d551ad67a0f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0116 02:02:38.207775  979198 system_pods.go:61] "etcd-addons-321835" [ac085342-d433-43e0-90b1-0c0ded554bae] Running
	I0116 02:02:38.207780  979198 system_pods.go:61] "kube-apiserver-addons-321835" [a494f63e-1a39-41d6-bcd1-d3727f82bd68] Running
	I0116 02:02:38.207784  979198 system_pods.go:61] "kube-controller-manager-addons-321835" [a9e62428-9609-4f55-b875-68fa03ca6961] Running
	I0116 02:02:38.207790  979198 system_pods.go:61] "kube-ingress-dns-minikube" [5f31a962-46bd-4b12-8a29-782496d107eb] Running
	I0116 02:02:38.207794  979198 system_pods.go:61] "kube-proxy-4jmxg" [8677cef8-204f-483d-8ac5-b0d2dc9c4080] Running
	I0116 02:02:38.207798  979198 system_pods.go:61] "kube-scheduler-addons-321835" [dc0a18c6-6857-4911-80dd-a18294096a3b] Running
	I0116 02:02:38.207802  979198 system_pods.go:61] "metrics-server-7c66d45ddc-296vl" [ebe68b5a-8342-40f5-9ac6-017909a26e0e] Running
	I0116 02:02:38.207806  979198 system_pods.go:61] "nvidia-device-plugin-daemonset-nvq58" [c9de0950-d70c-441e-adb4-e56150f45bb8] Running
	I0116 02:02:38.207812  979198 system_pods.go:61] "registry-j2d5p" [3dd2c768-f1a1-4679-82a2-ad8ff7e9af26] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0116 02:02:38.207821  979198 system_pods.go:61] "registry-proxy-q7nd5" [096ccac2-854d-42c9-b6c0-a77e42588aeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0116 02:02:38.207831  979198 system_pods.go:61] "snapshot-controller-58dbcc7b99-647lh" [1ec8e705-d3fd-4846-9453-cfaae08571d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 02:02:38.207841  979198 system_pods.go:61] "snapshot-controller-58dbcc7b99-lm65f" [b09eca87-9b26-4105-8096-43fe466726af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 02:02:38.207845  979198 system_pods.go:61] "storage-provisioner" [32902683-f363-41df-8423-bea7577187a3] Running
	I0116 02:02:38.207850  979198 system_pods.go:61] "tiller-deploy-7b677967b9-r89d7" [5ca2a0e1-24c1-4197-9fef-2a11b3731eb4] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0116 02:02:38.207860  979198 system_pods.go:74] duration metric: took 11.466395ms to wait for pod list to return data ...
	I0116 02:02:38.207871  979198 default_sa.go:34] waiting for default service account to be created ...
	I0116 02:02:38.211635  979198 default_sa.go:45] found service account: "default"
	I0116 02:02:38.211667  979198 default_sa.go:55] duration metric: took 3.78723ms for default service account to be created ...
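
	The default_sa step above simply waits until a ServiceAccount named "default" exists in the "default" namespace. A hedged client-go sketch of that check follows; the kubeconfig path is an assumption for illustration.

	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Kubeconfig path is an assumption for this sketch.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        // The waiter above succeeds once this Get stops returning NotFound.
	        sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println("found service account:", sa.Name)
	    }
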
	I0116 02:02:38.211677  979198 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 02:02:38.224733  979198 system_pods.go:86] 18 kube-system pods found
	I0116 02:02:38.224775  979198 system_pods.go:89] "coredns-5dd5756b68-hwrh2" [e53eefed-07be-4b83-95cd-b7784738a353] Running
	I0116 02:02:38.224781  979198 system_pods.go:89] "csi-hostpath-attacher-0" [cd5ad0a5-56a1-4e7a-88cc-fd5974cb13f8] Running
	I0116 02:02:38.224786  979198 system_pods.go:89] "csi-hostpath-resizer-0" [3d07a75e-7b41-43ea-9cd3-983440c7ea7c] Running
	I0116 02:02:38.224793  979198 system_pods.go:89] "csi-hostpathplugin-9dkh7" [e97162d6-3423-4069-8aec-d551ad67a0f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0116 02:02:38.224799  979198 system_pods.go:89] "etcd-addons-321835" [ac085342-d433-43e0-90b1-0c0ded554bae] Running
	I0116 02:02:38.224805  979198 system_pods.go:89] "kube-apiserver-addons-321835" [a494f63e-1a39-41d6-bcd1-d3727f82bd68] Running
	I0116 02:02:38.224812  979198 system_pods.go:89] "kube-controller-manager-addons-321835" [a9e62428-9609-4f55-b875-68fa03ca6961] Running
	I0116 02:02:38.224817  979198 system_pods.go:89] "kube-ingress-dns-minikube" [5f31a962-46bd-4b12-8a29-782496d107eb] Running
	I0116 02:02:38.224825  979198 system_pods.go:89] "kube-proxy-4jmxg" [8677cef8-204f-483d-8ac5-b0d2dc9c4080] Running
	I0116 02:02:38.224829  979198 system_pods.go:89] "kube-scheduler-addons-321835" [dc0a18c6-6857-4911-80dd-a18294096a3b] Running
	I0116 02:02:38.224836  979198 system_pods.go:89] "metrics-server-7c66d45ddc-296vl" [ebe68b5a-8342-40f5-9ac6-017909a26e0e] Running
	I0116 02:02:38.224840  979198 system_pods.go:89] "nvidia-device-plugin-daemonset-nvq58" [c9de0950-d70c-441e-adb4-e56150f45bb8] Running
	I0116 02:02:38.224848  979198 system_pods.go:89] "registry-j2d5p" [3dd2c768-f1a1-4679-82a2-ad8ff7e9af26] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0116 02:02:38.224854  979198 system_pods.go:89] "registry-proxy-q7nd5" [096ccac2-854d-42c9-b6c0-a77e42588aeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0116 02:02:38.224864  979198 system_pods.go:89] "snapshot-controller-58dbcc7b99-647lh" [1ec8e705-d3fd-4846-9453-cfaae08571d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 02:02:38.224873  979198 system_pods.go:89] "snapshot-controller-58dbcc7b99-lm65f" [b09eca87-9b26-4105-8096-43fe466726af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 02:02:38.224879  979198 system_pods.go:89] "storage-provisioner" [32902683-f363-41df-8423-bea7577187a3] Running
	I0116 02:02:38.224884  979198 system_pods.go:89] "tiller-deploy-7b677967b9-r89d7" [5ca2a0e1-24c1-4197-9fef-2a11b3731eb4] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0116 02:02:38.224895  979198 system_pods.go:126] duration metric: took 13.212434ms to wait for k8s-apps to be running ...
	I0116 02:02:38.224905  979198 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:02:38.224956  979198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:02:38.269693  979198 system_svc.go:56] duration metric: took 44.7725ms WaitForService to wait for kubelet.
	I0116 02:02:38.269732  979198 kubeadm.go:581] duration metric: took 58.606365064s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:02:38.269762  979198 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:02:38.273091  979198 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:02:38.273126  979198 node_conditions.go:123] node cpu capacity is 2
	I0116 02:02:38.273139  979198 node_conditions.go:105] duration metric: took 3.371953ms to run NodePressure ...
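
	The NodePressure step reads node capacity and conditions; the 17784752Ki ephemeral-storage and cpu=2 figures above come from Node.Status.Capacity. Below is a sketch of the same check written as a helper that takes an already-built clientset (constructed, for example, as in the previous sketch); it is illustrative, not minikube's own code.

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // checkNodePressure prints the fields the NodePressure step above logs:
	    // ephemeral-storage and cpu capacity plus the pressure conditions.
	    func checkNodePressure(ctx context.Context, cs kubernetes.Interface) error {
	        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	        if err != nil {
	            return err
	        }
	        for _, n := range nodes.Items {
	            fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n",
	                n.Name,
	                n.Status.Capacity.StorageEphemeral().String(),
	                n.Status.Capacity.Cpu().String())
	            for _, c := range n.Status.Conditions {
	                switch c.Type {
	                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
	                    // All three should report "False" on a healthy node.
	                    fmt.Printf("  %s=%s\n", c.Type, c.Status)
	                }
	            }
	        }
	        return nil
	    }
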
	I0116 02:02:38.273152  979198 start.go:228] waiting for startup goroutines ...
	I0116 02:02:38.340160  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:38.421604  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:38.606056  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:38.634123  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:38.841970  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:38.919073  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:39.106047  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:39.126164  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:39.340689  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:39.426404  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:39.605792  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:39.628124  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:39.841069  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:39.923044  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:40.108050  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:40.130502  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:40.340690  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:40.418868  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:40.605288  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:40.626017  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:40.841529  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:40.919717  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:41.105669  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:41.126962  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:41.341048  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:41.419721  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:41.604885  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:41.627085  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:41.840406  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:41.923881  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:42.105482  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:42.127367  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:42.614072  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:42.615691  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:42.616991  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:42.628639  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:42.841221  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:42.922115  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:43.105621  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:43.127167  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:43.340844  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:43.418665  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:43.605631  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:43.627155  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:43.879432  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:43.925040  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:44.113581  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:44.132183  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:44.341161  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:44.431017  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:44.608458  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:44.628144  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:44.841043  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:44.918099  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:45.115867  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:45.127794  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:45.340436  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:45.417664  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:45.604880  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:45.627147  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:45.840337  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:45.920071  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:46.105389  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:46.125929  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:46.342010  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:46.420077  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:46.605306  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:46.626429  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:46.840776  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:46.918423  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:47.122542  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:47.151331  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:47.341403  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:47.418724  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:47.605780  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:47.626840  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:47.841596  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:47.917840  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:48.111489  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:48.132858  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:48.342862  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:48.418316  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:48.609758  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:48.626981  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:48.840755  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:48.918746  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:49.105716  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:49.125798  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:49.340910  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:49.419631  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:49.611336  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:49.626793  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:49.842800  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:49.922456  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:50.106781  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:50.131519  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:50.345375  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:50.420175  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:50.610367  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:50.639288  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:50.840237  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:50.918520  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:51.108774  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:51.132106  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:51.341185  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:51.418747  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:51.604967  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:51.627063  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:51.840706  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:51.918893  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:52.107778  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:52.137023  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:02:52.349512  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:52.428651  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:52.611520  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:52.641725  979198 kapi.go:107] duration metric: took 1m5.520788963s to wait for kubernetes.io/minikube-addons=registry ...
	I0116 02:02:52.849842  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:52.925453  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:53.104779  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:53.358065  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:53.455619  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:53.607571  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:53.840950  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:53.938689  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:54.107023  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:54.345776  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:54.418107  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:54.604637  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:54.848559  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:54.917921  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:55.105002  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:55.342774  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:55.423226  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:55.605952  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:55.842238  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:55.919815  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:56.108098  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:56.346418  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:56.427969  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:56.607494  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:56.846506  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:56.931883  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:57.105932  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:57.341585  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:57.429387  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:57.606589  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:57.841991  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:57.917907  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:58.115887  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:58.341188  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:58.423904  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:58.607334  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:58.841103  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:58.918187  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:59.105187  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:59.372114  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:59.420222  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:02:59.605368  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:02:59.841160  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:02:59.927771  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:00.106519  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:03:00.341054  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:03:00.417791  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:00.604805  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:03:00.840567  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:03:00.920011  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:01.105056  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:03:01.341022  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:03:01.418204  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:01.605517  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:03:01.846044  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:03:01.918238  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:02.105515  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:03:02.343537  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:03:02.440002  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:02.610540  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:03:02.855321  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:03:03.262836  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:03:03.265612  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:03.340684  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:03:03.419084  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:03.605442  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:03:03.841290  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:03:03.919552  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:04.105455  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:03:04.351993  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:03:04.426032  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:04.633067  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:03:04.841519  979198 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:03:04.919190  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:05.104748  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:03:05.341713  979198 kapi.go:107] duration metric: took 1m16.507859615s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0116 02:03:05.423255  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:05.605404  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:03:05.919297  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:06.121902  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:03:06.423523  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:06.615805  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:03:06.918506  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:07.106406  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:03:07.433256  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:07.607308  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:03:07.918283  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:08.105735  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:03:08.418747  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:08.606161  979198 kapi.go:107] duration metric: took 1m16.505175377s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0116 02:03:08.608520  979198 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-321835 cluster.
	I0116 02:03:08.611199  979198 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0116 02:03:08.612876  979198 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0116 02:03:08.919399  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:09.431716  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:09.918535  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:10.417726  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:10.919126  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:11.418102  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:11.918186  979198 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:03:12.418716  979198 kapi.go:107] duration metric: took 1m22.506866525s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0116 02:03:12.420778  979198 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, helm-tiller, storage-provisioner, default-storageclass, metrics-server, yakd, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0116 02:03:12.422381  979198 addons.go:505] enable addons completed in 1m33.462636074s: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner-rancher helm-tiller storage-provisioner default-storageclass metrics-server yakd inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0116 02:03:12.422431  979198 start.go:233] waiting for cluster config update ...
	I0116 02:03:12.422451  979198 start.go:242] writing updated cluster config ...
	I0116 02:03:12.422819  979198 ssh_runner.go:195] Run: rm -f paused
	I0116 02:03:12.480183  979198 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 02:03:12.482246  979198 out.go:177] * Done! kubectl is now configured to use "addons-321835" cluster and "default" namespace by default
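
	Most of the log above is one pattern repeated: list pods matching a label selector and keep polling until every match reports Ready. The sketch below shows that loop with client-go; it is not minikube's own kapi.go implementation, the kubeconfig path is an assumption, and the selector and namespace are taken from the log.

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // waitForLabel polls until every pod matching the selector has Ready=True,
	    // mirroring the "waiting for pod ..." lines above (sketch only).
	    func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	            if err == nil && len(pods.Items) > 0 {
	                ready := true
	                for _, p := range pods.Items {
	                    if !isReady(p) {
	                        ready = false
	                        break
	                    }
	                }
	                if ready {
	                    return nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
	    }

	    func isReady(p corev1.Pod) bool {
	        for _, c := range p.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // path is an assumption
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        // Selector and namespace taken from the log above.
	        if err := waitForLabel(context.Background(), cs, "kube-system",
	            "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
	            panic(err)
	        }
	        fmt.Println("csi-hostpath-driver pods are Ready")
	    }
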
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 02:00:52 UTC, ends at Tue 2024-01-16 02:06:12 UTC. --
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.481103141Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8b423a39-376d-452a-9b58-8429b4a41754 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.483600687Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e67134ab-b4c0-4446-9d56-cf3822b6a8cf name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.484870278Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705370772484849431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=e67134ab-b4c0-4446-9d56-cf3822b6a8cf name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.485582057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dd1dfb56-9f3a-4d2c-afa5-172b5536c136 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.485638242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dd1dfb56-9f3a-4d2c-afa5-172b5536c136 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.486017670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75e4a22d250a807405ba7bc63298bd143bd91b59aac9827c7e544cb170274f36,PodSandboxId:5f939ccb5c70997e07cfa682926bf724c52394b68ce784dd494d6db607169b88,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705370764008685646,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-2ct5n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82c2a0b6-fe05-4fd7-b72a-14ff5a2b5476,},Annotations:map[string]string{io.kubernetes.container.hash: f32f9bee,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716e6750c2b6733a18807dc224c232d203ea549f8908b2ae2f142d8fe00a140b,PodSandboxId:6b329095dd38c88a82e0a3026e6bd35ddb86bf6c173cba69c0c9806c1e62839d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705370625908289528,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2c94fc6-73dd-4d9a-97ca-3e782e24db68,},Annotations:map[string]string{io.kubernet
es.container.hash: 5589d21e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d419fdc07bd782ef8df18acf1e67402d3d37dc20c603ee652445f21b45ff64a,PodSandboxId:40b822b69da4903e0ca042ba44f9cb31a243b995f1b9b845a72de822b37c292e,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1705370613093833316,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-6q6k8,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 0421c569-8afa-4fcf-9eaf-52b494eb32b6,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7de0e7,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb275e11af419d4003871605cf03669ff5d6c0e860eda1fe49d846e07abaecee,PodSandboxId:d04ccd9478f35f2e0ae42159a7f8e2495eff18c25b075a514d1f460e12b8fe60,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1705370587407463076,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-98xpz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 3c91513c-73ff-4529-83c2-c9e673c8895c,},Annotations:map[string]string{io.kubernetes.container.hash: a2478fac,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9161d7693da758ede3cfc95e4dd427ca0ff145bf4ee729ce45536d3e7dabc161,PodSandboxId:f66a93c787d44f68bb9499ee63062b140585dac0bebe5b5a5cf814ca82d7ecaa,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17053705
82090615243,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-674tw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f2c0f09f-382a-4500-bb55-2508391570f2,},Annotations:map[string]string{io.kubernetes.container.hash: 85d2eb0c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:934d3fcbbaa6080d1d96c50654e260db8bf17575334282191a2d2a5890f26ebb,PodSandboxId:780ab56487225152d2b53d19f5b4c49cecc0faa56fa30009ede18663d31e3e2c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705370570288756575,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-g7dqq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9e996e55-f0e4-4157-8a5c-92789c753c18,},Annotations:map[string]string{io.kubernetes.container.hash: 2fe4ea98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ca065a8248d23682515a3eaae7869054410a53e52e95d0cf09e99501ccb8ec0,PodSandboxId:4e5b31150872114be1d538871eb18b3b0278d3af37520e999fc15a8109ffdd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705370555238311887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32902683-f363-41df-8423-bea7577187a3,},Annotations:map[string]string{io.kubernetes.container.hash: da08b2e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2023ec465ebbecacba60acb2a92941abd467064b72bff41ccbb0725f8534e8a5,PodSandboxId:4e5b31150872114be1d538871eb18b3b0278d3af37520e999fc15a8109ffdd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705370520435491653,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32902683-f363-41df-8423-bea7577187a3,},Annotations:map[string]string{io.kubernetes.container.hash: da08b2e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfc947d3eb5cdf0868e3f71df7bd32bdb49426243d717fb56d2b328551507d5e,PodSandboxId:b130eed8c0dc582474c0c87d4bbf61a9fccd614e1673b032113ad8217798ca88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,Stat
e:CONTAINER_RUNNING,CreatedAt:1705370519801383287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4jmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8677cef8-204f-483d-8ac5-b0d2dc9c4080,},Annotations:map[string]string{io.kubernetes.container.hash: 64685c76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fa7609a97a54165bced16a86238cc86c4adad2a9e04cb18ef200bb958d77542,PodSandboxId:3ed54975a39683ac8c6da0c6c54b39f4ffc02d9ed10fa63dfb395c13ffcb7af6,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_R
UNNING,CreatedAt:1705370519907708033,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-r5xnq,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4be158dc-651a-4047-9116-213e19d2a128,},Annotations:map[string]string{io.kubernetes.container.hash: d7bb8097,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5667db916937ba754f1891989d15fd8046abbbeb8e8fa0bae0ba754ac21a0e,PodSandboxId:08806033f0d0976a9352436fedc879b95354aad35042de981a276f5223738263,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06
651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705370504692768615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hwrh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e53eefed-07be-4b83-95cd-b7784738a353,},Annotations:map[string]string{io.kubernetes.container.hash: 2346e277,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb314278db69433566e37009477d3b952f8a7df27437a03cfd2a93b308794635,PodSandboxId:439a6dbec9b3c0d8cb7a2aeb527347726372c0ddc3447405022bbd7359a4c4a0,Metadata:&ContainerMetadata{Name
:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705370479594566320,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65399b7be2a104cb06dd5681d16f8c43,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65aae40bc102d501438e317e8d6caa2371e8fa836ca7cd248bb4041e7b30f3d2,PodSandboxId:a4c5aba1652de3809cdf59c095f3648c51ed86576e2afe45d7e89ebde1c78835,Metadata:&ContainerMetadata{Name:etcd,Attempt:0
,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705370479533264995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528034ccbe30140109ab63b5d2a10907,},Annotations:map[string]string{io.kubernetes.container.hash: 1dee1bcd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be373cb72d98ba08440a9740ef0b20c76f0f5fefdca2c1ac76d5e9d1dabaa38e,PodSandboxId:dd58e6d3af9fca0fb9cbe75eb932174222c52e7d8f89d1ce8c681a809b9298fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725
e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705370479361005741,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27acc5ceef5a184a479396c53e4712c,},Annotations:map[string]string{io.kubernetes.container.hash: c4a2f536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e64d60eb0faf02f3d4911737a65a20af6648b667943f3e7fbd534db9ac8f19b,PodSandboxId:c1f4b3d486cb6a09797e08b1de8d013b7964fdbd0d1fab8d1ac1633436fa2062,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e718
8be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705370479303474083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22e86ca4f6d2a172d1015dc4bada229c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dd1dfb56-9f3a-4d2c-afa5-172b5536c136 name=/runtime.v1.RuntimeService/ListContainers
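
	The journal entries in this section are CRI-O serving CRI gRPC calls (Version, ImageFsInfo, ListContainers) over its unix socket, which is how the kubelet and crictl talk to it. Below is a hedged Go sketch that issues the same three RPCs; the socket path is CRI-O's usual default and is an assumption here.

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	        // Default CRI-O socket path (assumption for this sketch).
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            panic(err)
	        }
	        defer conn.Close()

	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()

	        rt := runtimeapi.NewRuntimeServiceClient(conn)
	        img := runtimeapi.NewImageServiceClient(conn)

	        // The same three RPCs the journal above shows being served.
	        ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

	        fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	        if err != nil {
	            panic(err)
	        }
	        for _, u := range fs.ImageFilesystems {
	            fmt.Println(u.FsId.Mountpoint, u.UsedBytes.Value)
	        }

	        ctrs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	        if err != nil {
	            panic(err)
	        }
	        for _, c := range ctrs.Containers {
	            fmt.Println(c.Metadata.Name, c.State)
	        }
	    }
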
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.522608455Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d98e530b-48b4-4273-8225-6d0ae35a83c8 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.522679626Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d98e530b-48b4-4273-8225-6d0ae35a83c8 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.524421153Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c7bf0bc9-e400-463e-9ae1-52876bb12916 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.526606978Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705370772526580687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=c7bf0bc9-e400-463e-9ae1-52876bb12916 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.527634070Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=37646038-b86b-44f3-bda8-0df5726deb63 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.527767944Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=37646038-b86b-44f3-bda8-0df5726deb63 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.528437507Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75e4a22d250a807405ba7bc63298bd143bd91b59aac9827c7e544cb170274f36,PodSandboxId:5f939ccb5c70997e07cfa682926bf724c52394b68ce784dd494d6db607169b88,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705370764008685646,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-2ct5n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82c2a0b6-fe05-4fd7-b72a-14ff5a2b5476,},Annotations:map[string]string{io.kubernetes.container.hash: f32f9bee,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716e6750c2b6733a18807dc224c232d203ea549f8908b2ae2f142d8fe00a140b,PodSandboxId:6b329095dd38c88a82e0a3026e6bd35ddb86bf6c173cba69c0c9806c1e62839d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705370625908289528,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2c94fc6-73dd-4d9a-97ca-3e782e24db68,},Annotations:map[string]string{io.kubernet
es.container.hash: 5589d21e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d419fdc07bd782ef8df18acf1e67402d3d37dc20c603ee652445f21b45ff64a,PodSandboxId:40b822b69da4903e0ca042ba44f9cb31a243b995f1b9b845a72de822b37c292e,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1705370613093833316,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-6q6k8,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 0421c569-8afa-4fcf-9eaf-52b494eb32b6,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7de0e7,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb275e11af419d4003871605cf03669ff5d6c0e860eda1fe49d846e07abaecee,PodSandboxId:d04ccd9478f35f2e0ae42159a7f8e2495eff18c25b075a514d1f460e12b8fe60,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1705370587407463076,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-98xpz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 3c91513c-73ff-4529-83c2-c9e673c8895c,},Annotations:map[string]string{io.kubernetes.container.hash: a2478fac,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9161d7693da758ede3cfc95e4dd427ca0ff145bf4ee729ce45536d3e7dabc161,PodSandboxId:f66a93c787d44f68bb9499ee63062b140585dac0bebe5b5a5cf814ca82d7ecaa,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17053705
82090615243,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-674tw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f2c0f09f-382a-4500-bb55-2508391570f2,},Annotations:map[string]string{io.kubernetes.container.hash: 85d2eb0c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:934d3fcbbaa6080d1d96c50654e260db8bf17575334282191a2d2a5890f26ebb,PodSandboxId:780ab56487225152d2b53d19f5b4c49cecc0faa56fa30009ede18663d31e3e2c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705370570288756575,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-g7dqq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9e996e55-f0e4-4157-8a5c-92789c753c18,},Annotations:map[string]string{io.kubernetes.container.hash: 2fe4ea98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ca065a8248d23682515a3eaae7869054410a53e52e95d0cf09e99501ccb8ec0,PodSandboxId:4e5b31150872114be1d538871eb18b3b0278d3af37520e999fc15a8109ffdd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705370555238311887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32902683-f363-41df-8423-bea7577187a3,},Annotations:map[string]string{io.kubernetes.container.hash: da08b2e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2023ec465ebbecacba60acb2a92941abd467064b72bff41ccbb0725f8534e8a5,PodSandboxId:4e5b31150872114be1d538871eb18b3b0278d3af37520e999fc15a8109ffdd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705370520435491653,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32902683-f363-41df-8423-bea7577187a3,},Annotations:map[string]string{io.kubernetes.container.hash: da08b2e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfc947d3eb5cdf0868e3f71df7bd32bdb49426243d717fb56d2b328551507d5e,PodSandboxId:b130eed8c0dc582474c0c87d4bbf61a9fccd614e1673b032113ad8217798ca88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,Stat
e:CONTAINER_RUNNING,CreatedAt:1705370519801383287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4jmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8677cef8-204f-483d-8ac5-b0d2dc9c4080,},Annotations:map[string]string{io.kubernetes.container.hash: 64685c76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fa7609a97a54165bced16a86238cc86c4adad2a9e04cb18ef200bb958d77542,PodSandboxId:3ed54975a39683ac8c6da0c6c54b39f4ffc02d9ed10fa63dfb395c13ffcb7af6,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_R
UNNING,CreatedAt:1705370519907708033,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-r5xnq,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4be158dc-651a-4047-9116-213e19d2a128,},Annotations:map[string]string{io.kubernetes.container.hash: d7bb8097,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5667db916937ba754f1891989d15fd8046abbbeb8e8fa0bae0ba754ac21a0e,PodSandboxId:08806033f0d0976a9352436fedc879b95354aad35042de981a276f5223738263,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06
651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705370504692768615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hwrh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e53eefed-07be-4b83-95cd-b7784738a353,},Annotations:map[string]string{io.kubernetes.container.hash: 2346e277,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb314278db69433566e37009477d3b952f8a7df27437a03cfd2a93b308794635,PodSandboxId:439a6dbec9b3c0d8cb7a2aeb527347726372c0ddc3447405022bbd7359a4c4a0,Metadata:&ContainerMetadata{Name
:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705370479594566320,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65399b7be2a104cb06dd5681d16f8c43,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65aae40bc102d501438e317e8d6caa2371e8fa836ca7cd248bb4041e7b30f3d2,PodSandboxId:a4c5aba1652de3809cdf59c095f3648c51ed86576e2afe45d7e89ebde1c78835,Metadata:&ContainerMetadata{Name:etcd,Attempt:0
,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705370479533264995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528034ccbe30140109ab63b5d2a10907,},Annotations:map[string]string{io.kubernetes.container.hash: 1dee1bcd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be373cb72d98ba08440a9740ef0b20c76f0f5fefdca2c1ac76d5e9d1dabaa38e,PodSandboxId:dd58e6d3af9fca0fb9cbe75eb932174222c52e7d8f89d1ce8c681a809b9298fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725
e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705370479361005741,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27acc5ceef5a184a479396c53e4712c,},Annotations:map[string]string{io.kubernetes.container.hash: c4a2f536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e64d60eb0faf02f3d4911737a65a20af6648b667943f3e7fbd534db9ac8f19b,PodSandboxId:c1f4b3d486cb6a09797e08b1de8d013b7964fdbd0d1fab8d1ac1633436fa2062,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e718
8be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705370479303474083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22e86ca4f6d2a172d1015dc4bada229c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=37646038-b86b-44f3-bda8-0df5726deb63 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.571584190Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3849766e-f963-494a-8554-8096110682ef name=/runtime.v1.RuntimeService/Version
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.571653900Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3849766e-f963-494a-8554-8096110682ef name=/runtime.v1.RuntimeService/Version
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.572681143Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=504057d1-f464-4705-91c9-febe92eed901 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.574583205Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705370772574516046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=504057d1-f464-4705-91c9-febe92eed901 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.575434137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a9efac2b-fd34-41bd-bfc5-571d0f1df4cb name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.575516556Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a9efac2b-fd34-41bd-bfc5-571d0f1df4cb name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.575823864Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75e4a22d250a807405ba7bc63298bd143bd91b59aac9827c7e544cb170274f36,PodSandboxId:5f939ccb5c70997e07cfa682926bf724c52394b68ce784dd494d6db607169b88,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705370764008685646,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-2ct5n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82c2a0b6-fe05-4fd7-b72a-14ff5a2b5476,},Annotations:map[string]string{io.kubernetes.container.hash: f32f9bee,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716e6750c2b6733a18807dc224c232d203ea549f8908b2ae2f142d8fe00a140b,PodSandboxId:6b329095dd38c88a82e0a3026e6bd35ddb86bf6c173cba69c0c9806c1e62839d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705370625908289528,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2c94fc6-73dd-4d9a-97ca-3e782e24db68,},Annotations:map[string]string{io.kubernet
es.container.hash: 5589d21e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d419fdc07bd782ef8df18acf1e67402d3d37dc20c603ee652445f21b45ff64a,PodSandboxId:40b822b69da4903e0ca042ba44f9cb31a243b995f1b9b845a72de822b37c292e,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1705370613093833316,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-6q6k8,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 0421c569-8afa-4fcf-9eaf-52b494eb32b6,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7de0e7,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb275e11af419d4003871605cf03669ff5d6c0e860eda1fe49d846e07abaecee,PodSandboxId:d04ccd9478f35f2e0ae42159a7f8e2495eff18c25b075a514d1f460e12b8fe60,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1705370587407463076,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-98xpz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 3c91513c-73ff-4529-83c2-c9e673c8895c,},Annotations:map[string]string{io.kubernetes.container.hash: a2478fac,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9161d7693da758ede3cfc95e4dd427ca0ff145bf4ee729ce45536d3e7dabc161,PodSandboxId:f66a93c787d44f68bb9499ee63062b140585dac0bebe5b5a5cf814ca82d7ecaa,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17053705
82090615243,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-674tw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f2c0f09f-382a-4500-bb55-2508391570f2,},Annotations:map[string]string{io.kubernetes.container.hash: 85d2eb0c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:934d3fcbbaa6080d1d96c50654e260db8bf17575334282191a2d2a5890f26ebb,PodSandboxId:780ab56487225152d2b53d19f5b4c49cecc0faa56fa30009ede18663d31e3e2c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705370570288756575,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-g7dqq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9e996e55-f0e4-4157-8a5c-92789c753c18,},Annotations:map[string]string{io.kubernetes.container.hash: 2fe4ea98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ca065a8248d23682515a3eaae7869054410a53e52e95d0cf09e99501ccb8ec0,PodSandboxId:4e5b31150872114be1d538871eb18b3b0278d3af37520e999fc15a8109ffdd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705370555238311887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32902683-f363-41df-8423-bea7577187a3,},Annotations:map[string]string{io.kubernetes.container.hash: da08b2e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2023ec465ebbecacba60acb2a92941abd467064b72bff41ccbb0725f8534e8a5,PodSandboxId:4e5b31150872114be1d538871eb18b3b0278d3af37520e999fc15a8109ffdd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705370520435491653,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32902683-f363-41df-8423-bea7577187a3,},Annotations:map[string]string{io.kubernetes.container.hash: da08b2e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfc947d3eb5cdf0868e3f71df7bd32bdb49426243d717fb56d2b328551507d5e,PodSandboxId:b130eed8c0dc582474c0c87d4bbf61a9fccd614e1673b032113ad8217798ca88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,Stat
e:CONTAINER_RUNNING,CreatedAt:1705370519801383287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4jmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8677cef8-204f-483d-8ac5-b0d2dc9c4080,},Annotations:map[string]string{io.kubernetes.container.hash: 64685c76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fa7609a97a54165bced16a86238cc86c4adad2a9e04cb18ef200bb958d77542,PodSandboxId:3ed54975a39683ac8c6da0c6c54b39f4ffc02d9ed10fa63dfb395c13ffcb7af6,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_R
UNNING,CreatedAt:1705370519907708033,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-r5xnq,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4be158dc-651a-4047-9116-213e19d2a128,},Annotations:map[string]string{io.kubernetes.container.hash: d7bb8097,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5667db916937ba754f1891989d15fd8046abbbeb8e8fa0bae0ba754ac21a0e,PodSandboxId:08806033f0d0976a9352436fedc879b95354aad35042de981a276f5223738263,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06
651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705370504692768615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hwrh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e53eefed-07be-4b83-95cd-b7784738a353,},Annotations:map[string]string{io.kubernetes.container.hash: 2346e277,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb314278db69433566e37009477d3b952f8a7df27437a03cfd2a93b308794635,PodSandboxId:439a6dbec9b3c0d8cb7a2aeb527347726372c0ddc3447405022bbd7359a4c4a0,Metadata:&ContainerMetadata{Name
:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705370479594566320,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65399b7be2a104cb06dd5681d16f8c43,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65aae40bc102d501438e317e8d6caa2371e8fa836ca7cd248bb4041e7b30f3d2,PodSandboxId:a4c5aba1652de3809cdf59c095f3648c51ed86576e2afe45d7e89ebde1c78835,Metadata:&ContainerMetadata{Name:etcd,Attempt:0
,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705370479533264995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528034ccbe30140109ab63b5d2a10907,},Annotations:map[string]string{io.kubernetes.container.hash: 1dee1bcd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be373cb72d98ba08440a9740ef0b20c76f0f5fefdca2c1ac76d5e9d1dabaa38e,PodSandboxId:dd58e6d3af9fca0fb9cbe75eb932174222c52e7d8f89d1ce8c681a809b9298fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725
e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705370479361005741,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27acc5ceef5a184a479396c53e4712c,},Annotations:map[string]string{io.kubernetes.container.hash: c4a2f536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e64d60eb0faf02f3d4911737a65a20af6648b667943f3e7fbd534db9ac8f19b,PodSandboxId:c1f4b3d486cb6a09797e08b1de8d013b7964fdbd0d1fab8d1ac1633436fa2062,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e718
8be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705370479303474083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22e86ca4f6d2a172d1015dc4bada229c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a9efac2b-fd34-41bd-bfc5-571d0f1df4cb name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.590805853Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=67a7f96c-b057-4e12-a6c5-c5c28e3fb653 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.591486849Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5f939ccb5c70997e07cfa682926bf724c52394b68ce784dd494d6db607169b88,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d77478584-2ct5n,Uid:82c2a0b6-fe05-4fd7-b72a-14ff5a2b5476,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705370761748634939,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d77478584-2ct5n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82c2a0b6-fe05-4fd7-b72a-14ff5a2b5476,pod-template-hash: 5d77478584,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T02:06:01.410526705Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6b329095dd38c88a82e0a3026e6bd35ddb86bf6c173cba69c0c9806c1e62839d,Metadata:&PodSandboxMetadata{Name:nginx,Uid:d2c94fc6-73dd-4d9a-97ca-3e782e24db68,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1705370622769196603,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2c94fc6-73dd-4d9a-97ca-3e782e24db68,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T02:03:41.556707427Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:40b822b69da4903e0ca042ba44f9cb31a243b995f1b9b845a72de822b37c292e,Metadata:&PodSandboxMetadata{Name:headlamp-7ddfbb94ff-6q6k8,Uid:0421c569-8afa-4fcf-9eaf-52b494eb32b6,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705370607350276397,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-7ddfbb94ff-6q6k8,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 0421c569-8afa-4fcf-9eaf-52b494eb32b6,pod-template-hash: 7ddfbb94ff,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
01-16T02:03:27.017828924Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d04ccd9478f35f2e0ae42159a7f8e2495eff18c25b075a514d1f460e12b8fe60,Metadata:&PodSandboxMetadata{Name:gcp-auth-d4c87556c-98xpz,Uid:3c91513c-73ff-4529-83c2-c9e673c8895c,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705370576145399875,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-d4c87556c-98xpz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 3c91513c-73ff-4529-83c2-c9e673c8895c,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: d4c87556c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T02:01:52.025074584Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:63221b7fbc2cd6bdb5800dacfa742959afbf45fb373b4b82dde34d204b40beb3,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-69cff4fd79-dwnts,Uid:707b676e-328d-4817-8f7a-7b39051fb2ee,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTRE
ADY,CreatedAt:1705370573706539591,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-69cff4fd79-dwnts,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 707b676e-328d-4817-8f7a-7b39051fb2ee,pod-template-hash: 69cff4fd79,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T02:01:48.617577033Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f66a93c787d44f68bb9499ee63062b140585dac0bebe5b5a5cf814ca82d7ecaa,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-674tw,Uid:f2c0f09f-382a-4500-bb55-2508391570f2,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1705370509090133962,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kube
rnetes.io/controller-uid: 467ca0e6-37cd-4016-95b6-dd6935383af6,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: 467ca0e6-37cd-4016-95b6-dd6935383af6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-674tw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f2c0f09f-382a-4500-bb55-2508391570f2,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T02:01:48.747465600Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:780ab56487225152d2b53d19f5b4c49cecc0faa56fa30009ede18663d31e3e2c,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-g7dqq,Uid:9e996e55-f0e4-4157-8a5c-92789c753c18,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1705370509038847111,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid:
dda07bd8-6aee-40b0-9561-e60885c73044,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: dda07bd8-6aee-40b0-9561-e60885c73044,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-g7dqq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9e996e55-f0e4-4157-8a5c-92789c753c18,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T02:01:48.700527486Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3ed54975a39683ac8c6da0c6c54b39f4ffc02d9ed10fa63dfb395c13ffcb7af6,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-9947fc6bf-r5xnq,Uid:4be158dc-651a-4047-9116-213e19d2a128,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705370507996458879,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-
r5xnq,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4be158dc-651a-4047-9116-213e19d2a128,pod-template-hash: 9947fc6bf,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T02:01:47.487442451Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4e5b31150872114be1d538871eb18b3b0278d3af37520e999fc15a8109ffdd77,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:32902683-f363-41df-8423-bea7577187a3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705370507703293323,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32902683-f363-41df-8423-bea7577187a3,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mo
de\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-16T02:01:47.199023449Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7062f23306ac4ca77af5f809412b5d53b9856c646bef65462f6e773d18cb0204,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:5f31a962-46bd-4b12-8a29-782496d107eb,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1705370506694108241,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f31a962-46bd-4b12-8a29-782496d107eb,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-01
-16T02:01:46.340289190Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b130eed8c0dc582474c0c87d4bbf61a9fccd614e1673b032113ad8217798ca88,Metadata:&PodSandboxMetadata{Name:kube-proxy-4jmxg,Uid:8677cef8-204f-483d-8ac5-b0d2dc9c4080,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705370501326727653,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4jmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8677cef8-204f-483d-8ac5-b0d2dc9c4080,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T02:01:39.497117495Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:08806033f0d0976a9352436fedc879b95354aad35042de981a276f5223738263,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-hwrh2,Uid:e53eefed-07be-4b83-95cd-b7784738a353,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705370501120037469
,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-hwrh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e53eefed-07be-4b83-95cd-b7784738a353,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T02:01:40.783171320Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dd58e6d3af9fca0fb9cbe75eb932174222c52e7d8f89d1ce8c681a809b9298fb,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-321835,Uid:f27acc5ceef5a184a479396c53e4712c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705370478679477589,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27acc5ceef5a184a479396c53e4712c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168
.39.11:8443,kubernetes.io/config.hash: f27acc5ceef5a184a479396c53e4712c,kubernetes.io/config.seen: 2024-01-16T02:01:18.116057692Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a4c5aba1652de3809cdf59c095f3648c51ed86576e2afe45d7e89ebde1c78835,Metadata:&PodSandboxMetadata{Name:etcd-addons-321835,Uid:528034ccbe30140109ab63b5d2a10907,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705370478650220609,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528034ccbe30140109ab63b5d2a10907,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.11:2379,kubernetes.io/config.hash: 528034ccbe30140109ab63b5d2a10907,kubernetes.io/config.seen: 2024-01-16T02:01:18.116063324Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:439a6dbec9b3c0d8cb7a2aeb527347726372c0ddc3447405022bbd7
359a4c4a0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-321835,Uid:65399b7be2a104cb06dd5681d16f8c43,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705370478628062546,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65399b7be2a104cb06dd5681d16f8c43,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 65399b7be2a104cb06dd5681d16f8c43,kubernetes.io/config.seen: 2024-01-16T02:01:18.116062481Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c1f4b3d486cb6a09797e08b1de8d013b7964fdbd0d1fab8d1ac1633436fa2062,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-321835,Uid:22e86ca4f6d2a172d1015dc4bada229c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705370478622736653,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.k
ubernetes.pod.name: kube-controller-manager-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22e86ca4f6d2a172d1015dc4bada229c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 22e86ca4f6d2a172d1015dc4bada229c,kubernetes.io/config.seen: 2024-01-16T02:01:18.116061523Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=67a7f96c-b057-4e12-a6c5-c5c28e3fb653 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.596056752Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0944181e-630c-4d9e-b4e6-548c04926354 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.596120405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0944181e-630c-4d9e-b4e6-548c04926354 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:06:12 addons-321835 crio[707]: time="2024-01-16 02:06:12.596427017Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75e4a22d250a807405ba7bc63298bd143bd91b59aac9827c7e544cb170274f36,PodSandboxId:5f939ccb5c70997e07cfa682926bf724c52394b68ce784dd494d6db607169b88,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705370764008685646,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-2ct5n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82c2a0b6-fe05-4fd7-b72a-14ff5a2b5476,},Annotations:map[string]string{io.kubernetes.container.hash: f32f9bee,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716e6750c2b6733a18807dc224c232d203ea549f8908b2ae2f142d8fe00a140b,PodSandboxId:6b329095dd38c88a82e0a3026e6bd35ddb86bf6c173cba69c0c9806c1e62839d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705370625908289528,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2c94fc6-73dd-4d9a-97ca-3e782e24db68,},Annotations:map[string]string{io.kubernet
es.container.hash: 5589d21e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d419fdc07bd782ef8df18acf1e67402d3d37dc20c603ee652445f21b45ff64a,PodSandboxId:40b822b69da4903e0ca042ba44f9cb31a243b995f1b9b845a72de822b37c292e,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1705370613093833316,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-6q6k8,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 0421c569-8afa-4fcf-9eaf-52b494eb32b6,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7de0e7,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb275e11af419d4003871605cf03669ff5d6c0e860eda1fe49d846e07abaecee,PodSandboxId:d04ccd9478f35f2e0ae42159a7f8e2495eff18c25b075a514d1f460e12b8fe60,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1705370587407463076,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-98xpz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 3c91513c-73ff-4529-83c2-c9e673c8895c,},Annotations:map[string]string{io.kubernetes.container.hash: a2478fac,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9161d7693da758ede3cfc95e4dd427ca0ff145bf4ee729ce45536d3e7dabc161,PodSandboxId:f66a93c787d44f68bb9499ee63062b140585dac0bebe5b5a5cf814ca82d7ecaa,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17053705
82090615243,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-674tw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f2c0f09f-382a-4500-bb55-2508391570f2,},Annotations:map[string]string{io.kubernetes.container.hash: 85d2eb0c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:934d3fcbbaa6080d1d96c50654e260db8bf17575334282191a2d2a5890f26ebb,PodSandboxId:780ab56487225152d2b53d19f5b4c49cecc0faa56fa30009ede18663d31e3e2c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705370570288756575,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-g7dqq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9e996e55-f0e4-4157-8a5c-92789c753c18,},Annotations:map[string]string{io.kubernetes.container.hash: 2fe4ea98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ca065a8248d23682515a3eaae7869054410a53e52e95d0cf09e99501ccb8ec0,PodSandboxId:4e5b31150872114be1d538871eb18b3b0278d3af37520e999fc15a8109ffdd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705370555238311887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32902683-f363-41df-8423-bea7577187a3,},Annotations:map[string]string{io.kubernetes.container.hash: da08b2e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2023ec465ebbecacba60acb2a92941abd467064b72bff41ccbb0725f8534e8a5,PodSandboxId:4e5b31150872114be1d538871eb18b3b0278d3af37520e999fc15a8109ffdd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705370520435491653,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32902683-f363-41df-8423-bea7577187a3,},Annotations:map[string]string{io.kubernetes.container.hash: da08b2e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfc947d3eb5cdf0868e3f71df7bd32bdb49426243d717fb56d2b328551507d5e,PodSandboxId:b130eed8c0dc582474c0c87d4bbf61a9fccd614e1673b032113ad8217798ca88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,Stat
e:CONTAINER_RUNNING,CreatedAt:1705370519801383287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4jmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8677cef8-204f-483d-8ac5-b0d2dc9c4080,},Annotations:map[string]string{io.kubernetes.container.hash: 64685c76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fa7609a97a54165bced16a86238cc86c4adad2a9e04cb18ef200bb958d77542,PodSandboxId:3ed54975a39683ac8c6da0c6c54b39f4ffc02d9ed10fa63dfb395c13ffcb7af6,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_R
UNNING,CreatedAt:1705370519907708033,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-r5xnq,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4be158dc-651a-4047-9116-213e19d2a128,},Annotations:map[string]string{io.kubernetes.container.hash: d7bb8097,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5667db916937ba754f1891989d15fd8046abbbeb8e8fa0bae0ba754ac21a0e,PodSandboxId:08806033f0d0976a9352436fedc879b95354aad35042de981a276f5223738263,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06
651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705370504692768615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hwrh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e53eefed-07be-4b83-95cd-b7784738a353,},Annotations:map[string]string{io.kubernetes.container.hash: 2346e277,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb314278db69433566e37009477d3b952f8a7df27437a03cfd2a93b308794635,PodSandboxId:439a6dbec9b3c0d8cb7a2aeb527347726372c0ddc3447405022bbd7359a4c4a0,Metadata:&ContainerMetadata{Name
:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705370479594566320,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65399b7be2a104cb06dd5681d16f8c43,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65aae40bc102d501438e317e8d6caa2371e8fa836ca7cd248bb4041e7b30f3d2,PodSandboxId:a4c5aba1652de3809cdf59c095f3648c51ed86576e2afe45d7e89ebde1c78835,Metadata:&ContainerMetadata{Name:etcd,Attempt:0
,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705370479533264995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528034ccbe30140109ab63b5d2a10907,},Annotations:map[string]string{io.kubernetes.container.hash: 1dee1bcd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be373cb72d98ba08440a9740ef0b20c76f0f5fefdca2c1ac76d5e9d1dabaa38e,PodSandboxId:dd58e6d3af9fca0fb9cbe75eb932174222c52e7d8f89d1ce8c681a809b9298fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725
e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705370479361005741,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27acc5ceef5a184a479396c53e4712c,},Annotations:map[string]string{io.kubernetes.container.hash: c4a2f536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e64d60eb0faf02f3d4911737a65a20af6648b667943f3e7fbd534db9ac8f19b,PodSandboxId:c1f4b3d486cb6a09797e08b1de8d013b7964fdbd0d1fab8d1ac1633436fa2062,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e718
8be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705370479303474083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-321835,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22e86ca4f6d2a172d1015dc4bada229c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0944181e-630c-4d9e-b4e6-548c04926354 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	75e4a22d250a8       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   5f939ccb5c709       hello-world-app-5d77478584-2ct5n
	716e6750c2b67       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                              2 minutes ago       Running             nginx                     0                   6b329095dd38c       nginx
	1d419fdc07bd7       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   40b822b69da49       headlamp-7ddfbb94ff-6q6k8
	bb275e11af419       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   d04ccd9478f35       gcp-auth-d4c87556c-98xpz
	9161d7693da75       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             3 minutes ago       Exited              patch                     2                   f66a93c787d44       ingress-nginx-admission-patch-674tw
	934d3fcbbaa60       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   780ab56487225       ingress-nginx-admission-create-g7dqq
	5ca065a8248d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       1                   4e5b311508721       storage-provisioner
	2023ec465ebbe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Exited              storage-provisioner       0                   4e5b311508721       storage-provisioner
	5fa7609a97a54       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   3ed54975a3968       yakd-dashboard-9947fc6bf-r5xnq
	dfc947d3eb5cd       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   b130eed8c0dc5       kube-proxy-4jmxg
	9d5667db91693       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   08806033f0d09       coredns-5dd5756b68-hwrh2
	fb314278db694       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   439a6dbec9b3c       kube-scheduler-addons-321835
	65aae40bc102d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   a4c5aba1652de       etcd-addons-321835
	be373cb72d98b       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   dd58e6d3af9fc       kube-apiserver-addons-321835
	9e64d60eb0faf       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   c1f4b3d486cb6       kube-controller-manager-addons-321835
	
	
	==> coredns [9d5667db916937ba754f1891989d15fd8046abbbeb8e8fa0bae0ba754ac21a0e] <==
	[INFO] 10.244.0.7:40894 - 14722 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000171795s
	[INFO] 10.244.0.7:51272 - 16697 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00004321s
	[INFO] 10.244.0.7:51272 - 15156 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000034773s
	[INFO] 10.244.0.7:53995 - 31561 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041626s
	[INFO] 10.244.0.7:53995 - 20299 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038s
	[INFO] 10.244.0.7:36930 - 63418 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000042529s
	[INFO] 10.244.0.7:36930 - 46520 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000035695s
	[INFO] 10.244.0.7:35569 - 48169 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075114s
	[INFO] 10.244.0.7:35569 - 14378 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000044598s
	[INFO] 10.244.0.7:35629 - 56438 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00029395s
	[INFO] 10.244.0.7:35629 - 53617 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000314809s
	[INFO] 10.244.0.7:55907 - 55986 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100579s
	[INFO] 10.244.0.7:55907 - 701 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.001880694s
	[INFO] 10.244.0.7:52651 - 11030 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00004944s
	[INFO] 10.244.0.7:52651 - 40208 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000336218s
	[INFO] 10.244.0.21:54914 - 56728 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000293438s
	[INFO] 10.244.0.21:34478 - 41401 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000122637s
	[INFO] 10.244.0.21:53265 - 44204 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152339s
	[INFO] 10.244.0.21:34622 - 47198 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000064038s
	[INFO] 10.244.0.21:49980 - 36790 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000057355s
	[INFO] 10.244.0.21:51852 - 60048 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000136737s
	[INFO] 10.244.0.21:39523 - 20200 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000615754s
	[INFO] 10.244.0.21:49707 - 5753 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.000513241s
	[INFO] 10.244.0.25:60589 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000355084s
	[INFO] 10.244.0.25:40242 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000397184s
	
	
	==> describe nodes <==
	Name:               addons-321835
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-321835
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=addons-321835
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T02_01_27_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-321835
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:01:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-321835
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:06:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:04:01 +0000   Tue, 16 Jan 2024 02:01:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:04:01 +0000   Tue, 16 Jan 2024 02:01:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:04:01 +0000   Tue, 16 Jan 2024 02:01:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:04:01 +0000   Tue, 16 Jan 2024 02:01:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    addons-321835
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 9ce1ce564e3c47efb1b0aea3c3ebc457
	  System UUID:                9ce1ce56-4e3c-47ef-b1b0-aea3c3ebc457
	  Boot ID:                    bf685a93-cc1b-438b-bbff-ad36e5c25713
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-2ct5n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-d4c87556c-98xpz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  headlamp                    headlamp-7ddfbb94ff-6q6k8                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-5dd5756b68-hwrh2                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m33s
	  kube-system                 etcd-addons-321835                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m45s
	  kube-system                 kube-apiserver-addons-321835             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-controller-manager-addons-321835    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-proxy-4jmxg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-scheduler-addons-321835             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-r5xnq           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  Starting                 4m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m54s (x8 over 4m54s)  kubelet          Node addons-321835 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s (x8 over 4m54s)  kubelet          Node addons-321835 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s (x7 over 4m54s)  kubelet          Node addons-321835 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m45s                  kubelet          Node addons-321835 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s                  kubelet          Node addons-321835 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s                  kubelet          Node addons-321835 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m45s                  kubelet          Node addons-321835 status is now: NodeReady
	  Normal  RegisteredNode           4m34s                  node-controller  Node addons-321835 event: Registered Node addons-321835 in Controller
	
	
	==> dmesg <==
	[  +2.738446] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +0.148432] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.028492] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan16 02:01] systemd-fstab-generator[631]: Ignoring "noauto" for root device
	[  +0.106023] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.141540] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.100705] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.214819] systemd-fstab-generator[690]: Ignoring "noauto" for root device
	[  +9.953352] systemd-fstab-generator[901]: Ignoring "noauto" for root device
	[  +9.778926] systemd-fstab-generator[1240]: Ignoring "noauto" for root device
	[ +25.108795] kauditd_printk_skb: 64 callbacks suppressed
	[Jan16 02:02] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.136205] kauditd_printk_skb: 20 callbacks suppressed
	[Jan16 02:03] kauditd_printk_skb: 41 callbacks suppressed
	[  +9.168465] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.043163] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.727094] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.696575] kauditd_printk_skb: 31 callbacks suppressed
	[ +26.016802] kauditd_printk_skb: 3 callbacks suppressed
	[Jan16 02:04] kauditd_printk_skb: 2 callbacks suppressed
	[ +20.842822] kauditd_printk_skb: 12 callbacks suppressed
	[Jan16 02:06] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [65aae40bc102d501438e317e8d6caa2371e8fa836ca7cd248bb4041e7b30f3d2] <==
	{"level":"warn","ts":"2024-01-16T02:03:03.254461Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T02:03:02.849951Z","time spent":"404.367516ms","remote":"127.0.0.1:51922","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1135 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-01-16T02:03:03.254484Z","caller":"traceutil/trace.go:171","msg":"trace[1524017946] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1145; }","duration":"342.223455ms","start":"2024-01-16T02:03:02.912253Z","end":"2024-01-16T02:03:03.254476Z","steps":["trace[1524017946] 'agreement among raft nodes before linearized reading'  (duration: 341.974524ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:03:03.25458Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T02:03:02.91223Z","time spent":"342.340465ms","remote":"127.0.0.1:51926","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":82464,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-01-16T02:03:03.254699Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.368589ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T02:03:03.25476Z","caller":"traceutil/trace.go:171","msg":"trace[1569441179] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1145; }","duration":"243.42857ms","start":"2024-01-16T02:03:03.011323Z","end":"2024-01-16T02:03:03.254752Z","steps":["trace[1569441179] 'agreement among raft nodes before linearized reading'  (duration: 243.353794ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:03:03.254991Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.239629ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11286"}
	{"level":"info","ts":"2024-01-16T02:03:03.255041Z","caller":"traceutil/trace.go:171","msg":"trace[93595028] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1145; }","duration":"153.293527ms","start":"2024-01-16T02:03:03.101741Z","end":"2024-01-16T02:03:03.255034Z","steps":["trace[93595028] 'agreement among raft nodes before linearized reading'  (duration: 153.124241ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:03:03.255117Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.634804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-674tw\" ","response":"range_response_count:1 size:4454"}
	{"level":"info","ts":"2024-01-16T02:03:03.255138Z","caller":"traceutil/trace.go:171","msg":"trace[1991320402] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-674tw; range_end:; response_count:1; response_revision:1145; }","duration":"169.655689ms","start":"2024-01-16T02:03:03.085475Z","end":"2024-01-16T02:03:03.25513Z","steps":["trace[1991320402] 'agreement among raft nodes before linearized reading'  (duration: 169.616992ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:03:09.420824Z","caller":"traceutil/trace.go:171","msg":"trace[1422416563] linearizableReadLoop","detail":"{readStateIndex:1237; appliedIndex:1236; }","duration":"110.076437ms","start":"2024-01-16T02:03:09.310736Z","end":"2024-01-16T02:03:09.420812Z","steps":["trace[1422416563] 'read index received'  (duration: 109.93391ms)","trace[1422416563] 'applied index is now lower than readState.Index'  (duration: 141.876µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T02:03:09.421157Z","caller":"traceutil/trace.go:171","msg":"trace[374844891] transaction","detail":"{read_only:false; response_revision:1201; number_of_response:1; }","duration":"227.172028ms","start":"2024-01-16T02:03:09.193974Z","end":"2024-01-16T02:03:09.421146Z","steps":["trace[374844891] 'process raft request'  (duration: 226.737807ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:03:09.421307Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.568863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-01-16T02:03:09.421361Z","caller":"traceutil/trace.go:171","msg":"trace[1793638182] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1201; }","duration":"110.645895ms","start":"2024-01-16T02:03:09.310706Z","end":"2024-01-16T02:03:09.421352Z","steps":["trace[1793638182] 'agreement among raft nodes before linearized reading'  (duration: 110.545224ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:03:11.592174Z","caller":"traceutil/trace.go:171","msg":"trace[980272541] linearizableReadLoop","detail":"{readStateIndex:1247; appliedIndex:1246; }","duration":"121.820805ms","start":"2024-01-16T02:03:11.470339Z","end":"2024-01-16T02:03:11.59216Z","steps":["trace[980272541] 'read index received'  (duration: 121.645373ms)","trace[980272541] 'applied index is now lower than readState.Index'  (duration: 174.931µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T02:03:11.592341Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.004533ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T02:03:11.592387Z","caller":"traceutil/trace.go:171","msg":"trace[702074772] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; response_count:0; response_revision:1210; }","duration":"122.072367ms","start":"2024-01-16T02:03:11.470308Z","end":"2024-01-16T02:03:11.59238Z","steps":["trace[702074772] 'agreement among raft nodes before linearized reading'  (duration: 121.931259ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:03:11.592591Z","caller":"traceutil/trace.go:171","msg":"trace[746533923] transaction","detail":"{read_only:false; response_revision:1210; number_of_response:1; }","duration":"154.376012ms","start":"2024-01-16T02:03:11.438198Z","end":"2024-01-16T02:03:11.592574Z","steps":["trace[746533923] 'process raft request'  (duration: 153.833331ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:03:31.379503Z","caller":"traceutil/trace.go:171","msg":"trace[2023274629] linearizableReadLoop","detail":"{readStateIndex:1478; appliedIndex:1477; }","duration":"180.25283ms","start":"2024-01-16T02:03:31.199228Z","end":"2024-01-16T02:03:31.37948Z","steps":["trace[2023274629] 'read index received'  (duration: 180.139992ms)","trace[2023274629] 'applied index is now lower than readState.Index'  (duration: 112.378µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T02:03:31.379954Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.644097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/registry-proxy-q7nd5\" ","response":"range_response_count:1 size:3870"}
	{"level":"info","ts":"2024-01-16T02:03:31.380674Z","caller":"traceutil/trace.go:171","msg":"trace[464480277] range","detail":"{range_begin:/registry/pods/kube-system/registry-proxy-q7nd5; range_end:; response_count:1; response_revision:1432; }","duration":"181.459605ms","start":"2024-01-16T02:03:31.199201Z","end":"2024-01-16T02:03:31.380661Z","steps":["trace[464480277] 'agreement among raft nodes before linearized reading'  (duration: 180.594309ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:03:42.143191Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.142628ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T02:03:42.143319Z","caller":"traceutil/trace.go:171","msg":"trace[877476336] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1523; }","duration":"132.308548ms","start":"2024-01-16T02:03:42.010999Z","end":"2024-01-16T02:03:42.143307Z","steps":["trace[877476336] 'range keys from in-memory index tree'  (duration: 131.895493ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:03:42.143742Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.403007ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-01-16T02:03:42.143811Z","caller":"traceutil/trace.go:171","msg":"trace[1011393339] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1523; }","duration":"201.477877ms","start":"2024-01-16T02:03:41.942323Z","end":"2024-01-16T02:03:42.143801Z","steps":["trace[1011393339] 'range keys from in-memory index tree'  (duration: 201.320996ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:03:47.667792Z","caller":"traceutil/trace.go:171","msg":"trace[1201297592] transaction","detail":"{read_only:false; response_revision:1543; number_of_response:1; }","duration":"173.941437ms","start":"2024-01-16T02:03:47.493836Z","end":"2024-01-16T02:03:47.667777Z","steps":["trace[1201297592] 'process raft request'  (duration: 173.555372ms)"],"step_count":1}
	
	
	==> gcp-auth [bb275e11af419d4003871605cf03669ff5d6c0e860eda1fe49d846e07abaecee] <==
	2024/01/16 02:03:07 GCP Auth Webhook started!
	2024/01/16 02:03:12 Ready to marshal response ...
	2024/01/16 02:03:12 Ready to write response ...
	2024/01/16 02:03:12 Ready to marshal response ...
	2024/01/16 02:03:12 Ready to write response ...
	2024/01/16 02:03:22 Ready to marshal response ...
	2024/01/16 02:03:22 Ready to write response ...
	2024/01/16 02:03:23 Ready to marshal response ...
	2024/01/16 02:03:23 Ready to write response ...
	2024/01/16 02:03:26 Ready to marshal response ...
	2024/01/16 02:03:26 Ready to write response ...
	2024/01/16 02:03:26 Ready to marshal response ...
	2024/01/16 02:03:26 Ready to write response ...
	2024/01/16 02:03:26 Ready to marshal response ...
	2024/01/16 02:03:26 Ready to write response ...
	2024/01/16 02:03:36 Ready to marshal response ...
	2024/01/16 02:03:36 Ready to write response ...
	2024/01/16 02:03:41 Ready to marshal response ...
	2024/01/16 02:03:41 Ready to write response ...
	2024/01/16 02:03:51 Ready to marshal response ...
	2024/01/16 02:03:51 Ready to write response ...
	2024/01/16 02:04:07 Ready to marshal response ...
	2024/01/16 02:04:07 Ready to write response ...
	2024/01/16 02:06:01 Ready to marshal response ...
	2024/01/16 02:06:01 Ready to write response ...
	
	
	==> kernel <==
	 02:06:13 up 5 min,  0 users,  load average: 0.50, 1.86, 1.02
	Linux addons-321835 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [be373cb72d98ba08440a9740ef0b20c76f0f5fefdca2c1ac76d5e9d1dabaa38e] <==
	W0116 02:03:36.515254       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0116 02:03:38.448122       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0116 02:03:41.400399       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0116 02:03:41.632168       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.196.241"}
	I0116 02:03:48.924284       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0116 02:04:22.807082       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:04:22.807245       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:04:22.817650       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:04:22.817724       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:04:22.830393       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:04:22.830486       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:04:22.848172       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:04:22.848237       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:04:22.895700       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:04:22.895876       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:04:22.945149       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:04:22.945302       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:04:22.971771       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:04:22.971869       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:04:22.976449       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:04:22.976501       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0116 02:04:23.830731       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0116 02:04:23.977388       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0116 02:04:23.990864       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0116 02:06:01.676651       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.61.152"}
	
	
	==> kube-controller-manager [9e64d60eb0faf02f3d4911737a65a20af6648b667943f3e7fbd534db9ac8f19b] <==
	W0116 02:05:03.169125       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:05:03.169273       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:05:24.196380       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:05:24.196462       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:05:24.585359       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:05:24.585451       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:05:35.159310       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:05:35.159489       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:05:54.175220       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:05:54.175290       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0116 02:06:01.346669       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0116 02:06:01.391285       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-2ct5n"
	I0116 02:06:01.405456       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="58.624104ms"
	I0116 02:06:01.428806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="23.250591ms"
	I0116 02:06:01.428978       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="103.237µs"
	I0116 02:06:01.445698       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="67.977µs"
	I0116 02:06:04.400389       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0116 02:06:04.416854       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0116 02:06:04.423417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="5.717µs"
	I0116 02:06:04.603041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="20.165877ms"
	I0116 02:06:04.603160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.427µs"
	W0116 02:06:12.681682       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:06:12.681762       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:06:12.795284       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:06:12.795339       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [dfc947d3eb5cdf0868e3f71df7bd32bdb49426243d717fb56d2b328551507d5e] <==
	I0116 02:02:01.325557       1 server_others.go:69] "Using iptables proxy"
	I0116 02:02:01.422455       1 node.go:141] Successfully retrieved node IP: 192.168.39.11
	I0116 02:02:02.587463       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 02:02:02.587513       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 02:02:02.610809       1 server_others.go:152] "Using iptables Proxier"
	I0116 02:02:02.616544       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 02:02:02.658638       1 server.go:846] "Version info" version="v1.28.4"
	I0116 02:02:02.658657       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 02:02:02.666974       1 config.go:188] "Starting service config controller"
	I0116 02:02:02.667024       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 02:02:02.667048       1 config.go:97] "Starting endpoint slice config controller"
	I0116 02:02:02.667872       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 02:02:02.676340       1 config.go:315] "Starting node config controller"
	I0116 02:02:02.676356       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 02:02:02.775370       1 shared_informer.go:318] Caches are synced for service config
	I0116 02:02:02.775678       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 02:02:02.776964       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [fb314278db69433566e37009477d3b952f8a7df27437a03cfd2a93b308794635] <==
	W0116 02:01:24.531338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:01:24.531404       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 02:01:24.539863       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 02:01:24.540035       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 02:01:24.598168       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 02:01:24.598406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0116 02:01:24.651001       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 02:01:24.651104       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 02:01:24.771802       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 02:01:24.772005       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 02:01:24.782786       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 02:01:24.783010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 02:01:24.807024       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 02:01:24.807132       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 02:01:24.821627       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 02:01:24.821772       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 02:01:24.867253       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 02:01:24.867344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 02:01:24.920093       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 02:01:24.920275       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0116 02:01:25.050478       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 02:01:25.050614       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 02:01:25.051741       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 02:01:25.051846       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0116 02:01:26.931392       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 02:00:52 UTC, ends at Tue 2024-01-16 02:06:13 UTC. --
	Jan 16 02:06:01 addons-321835 kubelet[1247]: I0116 02:06:01.411437    1247 memory_manager.go:346] "RemoveStaleState removing state" podUID="e97162d6-3423-4069-8aec-d551ad67a0f2" containerName="csi-snapshotter"
	Jan 16 02:06:01 addons-321835 kubelet[1247]: I0116 02:06:01.411443    1247 memory_manager.go:346] "RemoveStaleState removing state" podUID="e97162d6-3423-4069-8aec-d551ad67a0f2" containerName="csi-external-health-monitor-controller"
	Jan 16 02:06:01 addons-321835 kubelet[1247]: I0116 02:06:01.474364    1247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj7kd\" (UniqueName: \"kubernetes.io/projected/82c2a0b6-fe05-4fd7-b72a-14ff5a2b5476-kube-api-access-sj7kd\") pod \"hello-world-app-5d77478584-2ct5n\" (UID: \"82c2a0b6-fe05-4fd7-b72a-14ff5a2b5476\") " pod="default/hello-world-app-5d77478584-2ct5n"
	Jan 16 02:06:01 addons-321835 kubelet[1247]: I0116 02:06:01.474455    1247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/82c2a0b6-fe05-4fd7-b72a-14ff5a2b5476-gcp-creds\") pod \"hello-world-app-5d77478584-2ct5n\" (UID: \"82c2a0b6-fe05-4fd7-b72a-14ff5a2b5476\") " pod="default/hello-world-app-5d77478584-2ct5n"
	Jan 16 02:06:02 addons-321835 kubelet[1247]: I0116 02:06:02.993089    1247 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrpch\" (UniqueName: \"kubernetes.io/projected/5f31a962-46bd-4b12-8a29-782496d107eb-kube-api-access-mrpch\") pod \"5f31a962-46bd-4b12-8a29-782496d107eb\" (UID: \"5f31a962-46bd-4b12-8a29-782496d107eb\") "
	Jan 16 02:06:02 addons-321835 kubelet[1247]: I0116 02:06:02.999254    1247 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f31a962-46bd-4b12-8a29-782496d107eb-kube-api-access-mrpch" (OuterVolumeSpecName: "kube-api-access-mrpch") pod "5f31a962-46bd-4b12-8a29-782496d107eb" (UID: "5f31a962-46bd-4b12-8a29-782496d107eb"). InnerVolumeSpecName "kube-api-access-mrpch". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 02:06:03 addons-321835 kubelet[1247]: I0116 02:06:03.095107    1247 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mrpch\" (UniqueName: \"kubernetes.io/projected/5f31a962-46bd-4b12-8a29-782496d107eb-kube-api-access-mrpch\") on node \"addons-321835\" DevicePath \"\""
	Jan 16 02:06:03 addons-321835 kubelet[1247]: I0116 02:06:03.535536    1247 scope.go:117] "RemoveContainer" containerID="9374dadf1dc1242e15b12770a4c88b2fae0ae04e6be5afc490961e31d5a2d423"
	Jan 16 02:06:03 addons-321835 kubelet[1247]: I0116 02:06:03.618818    1247 scope.go:117] "RemoveContainer" containerID="9374dadf1dc1242e15b12770a4c88b2fae0ae04e6be5afc490961e31d5a2d423"
	Jan 16 02:06:03 addons-321835 kubelet[1247]: I0116 02:06:03.621235    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5f31a962-46bd-4b12-8a29-782496d107eb" path="/var/lib/kubelet/pods/5f31a962-46bd-4b12-8a29-782496d107eb/volumes"
	Jan 16 02:06:03 addons-321835 kubelet[1247]: E0116 02:06:03.621748    1247 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9374dadf1dc1242e15b12770a4c88b2fae0ae04e6be5afc490961e31d5a2d423\": container with ID starting with 9374dadf1dc1242e15b12770a4c88b2fae0ae04e6be5afc490961e31d5a2d423 not found: ID does not exist" containerID="9374dadf1dc1242e15b12770a4c88b2fae0ae04e6be5afc490961e31d5a2d423"
	Jan 16 02:06:03 addons-321835 kubelet[1247]: I0116 02:06:03.621828    1247 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9374dadf1dc1242e15b12770a4c88b2fae0ae04e6be5afc490961e31d5a2d423"} err="failed to get container status \"9374dadf1dc1242e15b12770a4c88b2fae0ae04e6be5afc490961e31d5a2d423\": rpc error: code = NotFound desc = could not find container \"9374dadf1dc1242e15b12770a4c88b2fae0ae04e6be5afc490961e31d5a2d423\": container with ID starting with 9374dadf1dc1242e15b12770a4c88b2fae0ae04e6be5afc490961e31d5a2d423 not found: ID does not exist"
	Jan 16 02:06:05 addons-321835 kubelet[1247]: I0116 02:06:05.613020    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9e996e55-f0e4-4157-8a5c-92789c753c18" path="/var/lib/kubelet/pods/9e996e55-f0e4-4157-8a5c-92789c753c18/volumes"
	Jan 16 02:06:05 addons-321835 kubelet[1247]: I0116 02:06:05.613455    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f2c0f09f-382a-4500-bb55-2508391570f2" path="/var/lib/kubelet/pods/f2c0f09f-382a-4500-bb55-2508391570f2/volumes"
	Jan 16 02:06:07 addons-321835 kubelet[1247]: I0116 02:06:07.834593    1247 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/707b676e-328d-4817-8f7a-7b39051fb2ee-webhook-cert\") pod \"707b676e-328d-4817-8f7a-7b39051fb2ee\" (UID: \"707b676e-328d-4817-8f7a-7b39051fb2ee\") "
	Jan 16 02:06:07 addons-321835 kubelet[1247]: I0116 02:06:07.834662    1247 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6hbv\" (UniqueName: \"kubernetes.io/projected/707b676e-328d-4817-8f7a-7b39051fb2ee-kube-api-access-h6hbv\") pod \"707b676e-328d-4817-8f7a-7b39051fb2ee\" (UID: \"707b676e-328d-4817-8f7a-7b39051fb2ee\") "
	Jan 16 02:06:07 addons-321835 kubelet[1247]: I0116 02:06:07.837292    1247 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/707b676e-328d-4817-8f7a-7b39051fb2ee-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "707b676e-328d-4817-8f7a-7b39051fb2ee" (UID: "707b676e-328d-4817-8f7a-7b39051fb2ee"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 02:06:07 addons-321835 kubelet[1247]: I0116 02:06:07.839161    1247 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/707b676e-328d-4817-8f7a-7b39051fb2ee-kube-api-access-h6hbv" (OuterVolumeSpecName: "kube-api-access-h6hbv") pod "707b676e-328d-4817-8f7a-7b39051fb2ee" (UID: "707b676e-328d-4817-8f7a-7b39051fb2ee"). InnerVolumeSpecName "kube-api-access-h6hbv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 02:06:07 addons-321835 kubelet[1247]: I0116 02:06:07.935607    1247 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h6hbv\" (UniqueName: \"kubernetes.io/projected/707b676e-328d-4817-8f7a-7b39051fb2ee-kube-api-access-h6hbv\") on node \"addons-321835\" DevicePath \"\""
	Jan 16 02:06:07 addons-321835 kubelet[1247]: I0116 02:06:07.935645    1247 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/707b676e-328d-4817-8f7a-7b39051fb2ee-webhook-cert\") on node \"addons-321835\" DevicePath \"\""
	Jan 16 02:06:08 addons-321835 kubelet[1247]: I0116 02:06:08.574687    1247 scope.go:117] "RemoveContainer" containerID="d1b89352f93772bd23ded8ffdd76bb7e8598e885226e923888d5292d978d8e79"
	Jan 16 02:06:08 addons-321835 kubelet[1247]: I0116 02:06:08.611277    1247 scope.go:117] "RemoveContainer" containerID="d1b89352f93772bd23ded8ffdd76bb7e8598e885226e923888d5292d978d8e79"
	Jan 16 02:06:08 addons-321835 kubelet[1247]: E0116 02:06:08.611775    1247 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1b89352f93772bd23ded8ffdd76bb7e8598e885226e923888d5292d978d8e79\": container with ID starting with d1b89352f93772bd23ded8ffdd76bb7e8598e885226e923888d5292d978d8e79 not found: ID does not exist" containerID="d1b89352f93772bd23ded8ffdd76bb7e8598e885226e923888d5292d978d8e79"
	Jan 16 02:06:08 addons-321835 kubelet[1247]: I0116 02:06:08.611859    1247 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1b89352f93772bd23ded8ffdd76bb7e8598e885226e923888d5292d978d8e79"} err="failed to get container status \"d1b89352f93772bd23ded8ffdd76bb7e8598e885226e923888d5292d978d8e79\": rpc error: code = NotFound desc = could not find container \"d1b89352f93772bd23ded8ffdd76bb7e8598e885226e923888d5292d978d8e79\": container with ID starting with d1b89352f93772bd23ded8ffdd76bb7e8598e885226e923888d5292d978d8e79 not found: ID does not exist"
	Jan 16 02:06:09 addons-321835 kubelet[1247]: I0116 02:06:09.611512    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="707b676e-328d-4817-8f7a-7b39051fb2ee" path="/var/lib/kubelet/pods/707b676e-328d-4817-8f7a-7b39051fb2ee/volumes"
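
Note: the ContainerStatus "NotFound" errors right after each RemoveContainer are benign; the container is already gone and CRI-O reports that as a gRPC NotFound, which the kubelet then treats as "nothing left to delete". A small sketch (not the kubelet's code) of how such an error can be classified:

	package main

	import (
		"errors"
		"fmt"

		"google.golang.org/grpc/codes"
		"google.golang.org/grpc/status"
	)

	// alreadyGone reports whether a CRI call failed only because the container
	// no longer exists, mirroring the NotFound handling seen in the kubelet log.
	func alreadyGone(err error) bool {
		if err == nil {
			return false
		}
		if s, ok := status.FromError(err); ok {
			return s.Code() == codes.NotFound
		}
		return false
	}

	func main() {
		err := status.Error(codes.NotFound, "could not find container")
		fmt.Println(alreadyGone(err))             // true: safe to treat the delete as a no-op
		fmt.Println(alreadyGone(errors.New("x"))) // false: some other failure
	}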
	
	
	==> storage-provisioner [2023ec465ebbecacba60acb2a92941abd467064b72bff41ccbb0725f8534e8a5] <==
	I0116 02:02:01.441180       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0116 02:02:31.480611       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
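
Note: this first storage-provisioner container exits fatally because its API-server probe (GET https://10.96.0.1:443/version?timeout=32s) times out; the replacement container started at 02:02:35 below succeeds. A minimal client-go sketch of an equivalent /version probe, assuming it runs in-cluster (this is not the provisioner's own code):

	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/discovery"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config resolves to the service IP seen in the log (10.96.0.1:443).
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cfg.Timeout = 32 * time.Second // per-request timeout, like the failing GET /version?timeout=32s

		dc, err := discovery.NewDiscoveryClientForConfig(cfg)
		if err != nil {
			panic(err)
		}
		v, err := dc.ServerVersion()
		if err != nil {
			// An i/o timeout here usually means the apiserver/kube-proxy path is not up yet.
			panic(fmt.Errorf("error getting server version: %w", err))
		}
		fmt.Println("server version:", v.GitVersion)
	}

An i/o timeout on this call during the first minute of cluster bring-up, followed by a clean restart, is the pattern visible here.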
	
	
	==> storage-provisioner [5ca065a8248d23682515a3eaae7869054410a53e52e95d0cf09e99501ccb8ec0] <==
	I0116 02:02:35.527649       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 02:02:35.541583       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 02:02:35.541869       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 02:02:35.557440       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 02:02:35.560979       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-321835_a25e121b-017f-4a7b-b7f1-fb17c8e23c99!
	I0116 02:02:35.577127       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1f473841-a029-47a1-b603-b328c7355b93", APIVersion:"v1", ResourceVersion:"979", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-321835_a25e121b-017f-4a7b-b7f1-fb17c8e23c99 became leader
	I0116 02:02:35.664178       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-321835_a25e121b-017f-4a7b-b7f1-fb17c8e23c99!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-321835 -n addons-321835
helpers_test.go:261: (dbg) Run:  kubectl --context addons-321835 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.86s)
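
Note: the failing step drives curl inside the VM over minikube ssh with an overridden Host header; curl's exit status 28 means the request timed out, i.e. the ingress controller never answered on port 80. A standalone Go sketch of the same request, as a stand-in for the curl call (not the test's implementation):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		// The Host header is what routes the request to the nginx Ingress rule,
		// just like curl's -H 'Host: nginx.example.com'.
		req.Host = "nginx.example.com"

		client := &http.Client{Timeout: 2 * time.Minute} // curl hit this kind of deadline (exit status 28)
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("ingress not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, len(body), "bytes")
	}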

                                                
                                    
TestAddons/StoppedEnableDisable (154.16s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-321835
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-321835: exit status 82 (2m0.309789731s)

                                                
                                                
-- stdout --
	* Stopping node "addons-321835"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-321835" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-321835
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-321835: exit status 11 (21.563992112s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-321835" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-321835
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-321835: exit status 11 (6.14227348s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-321835" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-321835
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-321835: exit status 11 (6.145354648s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-321835" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.16s)
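
Note: all three addon commands above fail before touching any addon: minikube first opens an SSH session to the node, and the TCP dial to 192.168.39.11:22 returns "no route to host", consistent with the VM being wedged after the GUEST_STOP_TIMEOUT. A minimal reachability probe under the same assumption (address taken from the stderr above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address from the stderr above; adjust for your own profile.
		addr := "192.168.39.11:22"
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// "connect: no route to host" shows up here when the guest network is gone
			// even though the libvirt domain is still reported as Running.
			fmt.Println("ssh port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable")
	}

When this probe fails, `minikube stop` has effectively half-completed: the guest no longer answers on the network but the hypervisor still reports the domain as running, which is exactly the state the stop command timed out on.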

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (172.76s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-473102 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-473102 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.35763214s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-473102 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-473102 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e16f5d1e-a4fe-4dbc-807c-3f975fc47d17] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e16f5d1e-a4fe-4dbc-807c-3f975fc47d17] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.004393192s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-473102 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0116 02:15:56.342173  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-473102 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.304502527s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-473102 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-473102 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.44
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-473102 addons disable ingress-dns --alsologtostderr -v=1
E0116 02:17:27.512685  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:17:27.518008  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:17:27.528383  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:17:27.548705  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:17:27.589034  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:17:27.669445  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:17:27.829913  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:17:28.150525  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:17:28.791091  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:17:30.071726  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:17:32.632205  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-473102 addons disable ingress-dns --alsologtostderr -v=1: (6.38902494s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-473102 addons disable ingress --alsologtostderr -v=1
E0116 02:17:37.752448  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-473102 addons disable ingress --alsologtostderr -v=1: (7.571079872s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-473102 -n ingress-addon-legacy-473102
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-473102 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-473102 logs -n 25: (1.266927763s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| start          | -p functional-941139                                     | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:12 UTC |                     |
	|                | --dry-run --alsologtostderr                              |                             |         |         |                     |                     |
	|                | -v=1 --driver=kvm2                                       |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                 |                             |         |         |                     |                     |
	| start          | -p functional-941139                                     | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:12 UTC |                     |
	|                | --dry-run --memory                                       |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                  |                             |         |         |                     |                     |
	|                | --driver=kvm2                                            |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                 |                             |         |         |                     |                     |
	| ssh            | functional-941139 ssh sudo cat                           | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:12 UTC | 16 Jan 24 02:12 UTC |
	|                | /etc/test/nested/copy/978482/hosts                       |                             |         |         |                     |                     |
	| image          | functional-941139 image ls                               | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:12 UTC | 16 Jan 24 02:12 UTC |
	| image          | functional-941139 image save --daemon                    | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:12 UTC | 16 Jan 24 02:12 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-941139 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                        |                             |         |         |                     |                     |
	| dashboard      | --url --port 36195                                       | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:12 UTC | 16 Jan 24 02:13 UTC |
	|                | -p functional-941139                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	| service        | functional-941139 service                                | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:12 UTC | 16 Jan 24 02:12 UTC |
	|                | hello-node-connect --url                                 |                             |         |         |                     |                     |
	| update-context | functional-941139                                        | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:12 UTC | 16 Jan 24 02:12 UTC |
	|                | update-context                                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                             |         |         |                     |                     |
	| update-context | functional-941139                                        | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:12 UTC | 16 Jan 24 02:12 UTC |
	|                | update-context                                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                             |         |         |                     |                     |
	| update-context | functional-941139                                        | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:12 UTC | 16 Jan 24 02:12 UTC |
	|                | update-context                                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                             |         |         |                     |                     |
	| image          | functional-941139                                        | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:12 UTC | 16 Jan 24 02:12 UTC |
	|                | image ls --format short                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image          | functional-941139                                        | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:12 UTC | 16 Jan 24 02:12 UTC |
	|                | image ls --format yaml                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                        |                             |         |         |                     |                     |
	| ssh            | functional-941139 ssh pgrep                              | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:12 UTC |                     |
	|                | buildkitd                                                |                             |         |         |                     |                     |
	| image          | functional-941139 image build -t                         | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:12 UTC | 16 Jan 24 02:13 UTC |
	|                | localhost/my-image:functional-941139                     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                         |                             |         |         |                     |                     |
	| image          | functional-941139 image ls                               | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:13 UTC | 16 Jan 24 02:13 UTC |
	| image          | functional-941139                                        | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:13 UTC | 16 Jan 24 02:13 UTC |
	|                | image ls --format json                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image          | functional-941139                                        | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:13 UTC | 16 Jan 24 02:13 UTC |
	|                | image ls --format table                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                        |                             |         |         |                     |                     |
	| delete         | -p functional-941139                                     | functional-941139           | jenkins | v1.32.0 | 16 Jan 24 02:13 UTC | 16 Jan 24 02:13 UTC |
	| start          | -p ingress-addon-legacy-473102                           | ingress-addon-legacy-473102 | jenkins | v1.32.0 | 16 Jan 24 02:13 UTC | 16 Jan 24 02:14 UTC |
	|                | --kubernetes-version=v1.18.20                            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                        |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                       |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-473102                              | ingress-addon-legacy-473102 | jenkins | v1.32.0 | 16 Jan 24 02:14 UTC | 16 Jan 24 02:14 UTC |
	|                | addons enable ingress                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-473102                              | ingress-addon-legacy-473102 | jenkins | v1.32.0 | 16 Jan 24 02:14 UTC | 16 Jan 24 02:14 UTC |
	|                | addons enable ingress-dns                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-473102                              | ingress-addon-legacy-473102 | jenkins | v1.32.0 | 16 Jan 24 02:15 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-473102 ip                           | ingress-addon-legacy-473102 | jenkins | v1.32.0 | 16 Jan 24 02:17 UTC | 16 Jan 24 02:17 UTC |
	| addons         | ingress-addon-legacy-473102                              | ingress-addon-legacy-473102 | jenkins | v1.32.0 | 16 Jan 24 02:17 UTC | 16 Jan 24 02:17 UTC |
	|                | addons disable ingress-dns                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-473102                              | ingress-addon-legacy-473102 | jenkins | v1.32.0 | 16 Jan 24 02:17 UTC | 16 Jan 24 02:17 UTC |
	|                | addons disable ingress                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	|----------------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:13:18
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:13:18.878919  987236 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:13:18.879054  987236 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:13:18.879065  987236 out.go:309] Setting ErrFile to fd 2...
	I0116 02:13:18.879083  987236 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:13:18.879334  987236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 02:13:18.879999  987236 out.go:303] Setting JSON to false
	I0116 02:13:18.880994  987236 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10548,"bootTime":1705360651,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:13:18.881111  987236 start.go:138] virtualization: kvm guest
	I0116 02:13:18.883799  987236 out.go:177] * [ingress-addon-legacy-473102] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:13:18.885814  987236 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 02:13:18.885814  987236 notify.go:220] Checking for updates...
	I0116 02:13:18.887391  987236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:13:18.889330  987236 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:13:18.891165  987236 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:13:18.892918  987236 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:13:18.894577  987236 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:13:18.896596  987236 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:13:18.932235  987236 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 02:13:18.933645  987236 start.go:298] selected driver: kvm2
	I0116 02:13:18.933669  987236 start.go:902] validating driver "kvm2" against <nil>
	I0116 02:13:18.933684  987236 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:13:18.934442  987236 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:13:18.934535  987236 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17967-971255/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 02:13:18.949414  987236 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 02:13:18.949473  987236 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:13:18.949690  987236 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 02:13:18.949762  987236 cni.go:84] Creating CNI manager for ""
	I0116 02:13:18.949775  987236 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 02:13:18.949784  987236 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 02:13:18.949792  987236 start_flags.go:321] config:
	{Name:ingress-addon-legacy-473102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-473102 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:13:18.949986  987236 iso.go:125] acquiring lock: {Name:mkc0ce0b4b435c5fb7570521b794e5982bb7bf6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:13:18.952226  987236 out.go:177] * Starting control plane node ingress-addon-legacy-473102 in cluster ingress-addon-legacy-473102
	I0116 02:13:18.953717  987236 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 02:13:18.976561  987236 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0116 02:13:18.976600  987236 cache.go:56] Caching tarball of preloaded images
	I0116 02:13:18.976788  987236 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 02:13:18.978816  987236 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0116 02:13:18.980187  987236 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:13:19.005059  987236 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0116 02:13:22.532951  987236 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:13:22.533054  987236 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:13:23.677624  987236 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
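
Note: the preload tarball is fetched with an md5 supplied in the ?checksum= query parameter and then verified on disk before being trusted, as logged above. A sketch of that download-then-verify step (URL and checksum copied from the log; the destination path is hypothetical, and this is not minikube's own downloader):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func main() {
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4"
		want := "0d02e096853189c5b37812b400898e14" // md5 from the ?checksum= parameter in the log
		out := "/tmp/preload.tar.lz4"              // hypothetical destination

		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		f, err := os.Create(out)
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// Hash while writing so the tarball is only read once.
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
			panic(err)
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
		}
		fmt.Println("preload downloaded and verified:", out)
	}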
	I0116 02:13:23.678010  987236 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/config.json ...
	I0116 02:13:23.678044  987236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/config.json: {Name:mk1602394dedc10eff5c025684448c1c271982a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:13:23.678268  987236 start.go:365] acquiring machines lock for ingress-addon-legacy-473102: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 02:13:23.678313  987236 start.go:369] acquired machines lock for "ingress-addon-legacy-473102" in 22.177µs
	I0116 02:13:23.678338  987236 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-473102 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-473102 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:13:23.678419  987236 start.go:125] createHost starting for "" (driver="kvm2")
	I0116 02:13:23.680862  987236 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0116 02:13:23.681027  987236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:13:23.681062  987236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:13:23.695651  987236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38853
	I0116 02:13:23.696169  987236 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:13:23.696806  987236 main.go:141] libmachine: Using API Version  1
	I0116 02:13:23.696827  987236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:13:23.697153  987236 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:13:23.697352  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetMachineName
	I0116 02:13:23.697475  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .DriverName
	I0116 02:13:23.697701  987236 start.go:159] libmachine.API.Create for "ingress-addon-legacy-473102" (driver="kvm2")
	I0116 02:13:23.697729  987236 client.go:168] LocalClient.Create starting
	I0116 02:13:23.697769  987236 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem
	I0116 02:13:23.697831  987236 main.go:141] libmachine: Decoding PEM data...
	I0116 02:13:23.697855  987236 main.go:141] libmachine: Parsing certificate...
	I0116 02:13:23.697922  987236 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem
	I0116 02:13:23.697953  987236 main.go:141] libmachine: Decoding PEM data...
	I0116 02:13:23.697974  987236 main.go:141] libmachine: Parsing certificate...
	I0116 02:13:23.698003  987236 main.go:141] libmachine: Running pre-create checks...
	I0116 02:13:23.698019  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .PreCreateCheck
	I0116 02:13:23.698371  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetConfigRaw
	I0116 02:13:23.698761  987236 main.go:141] libmachine: Creating machine...
	I0116 02:13:23.698778  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .Create
	I0116 02:13:23.698930  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Creating KVM machine...
	I0116 02:13:23.700264  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found existing default KVM network
	I0116 02:13:23.701001  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:23.700843  987270 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d1e0}
	I0116 02:13:23.706524  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | trying to create private KVM network mk-ingress-addon-legacy-473102 192.168.39.0/24...
	I0116 02:13:23.778285  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Setting up store path in /home/jenkins/minikube-integration/17967-971255/.minikube/machines/ingress-addon-legacy-473102 ...
	I0116 02:13:23.778318  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | private KVM network mk-ingress-addon-legacy-473102 192.168.39.0/24 created
	I0116 02:13:23.778332  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Building disk image from file:///home/jenkins/minikube-integration/17967-971255/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 02:13:23.778360  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:23.778222  987270 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:13:23.778449  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Downloading /home/jenkins/minikube-integration/17967-971255/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17967-971255/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 02:13:24.019486  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:24.019336  987270 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/ingress-addon-legacy-473102/id_rsa...
	I0116 02:13:24.230809  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:24.230621  987270 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/ingress-addon-legacy-473102/ingress-addon-legacy-473102.rawdisk...
	I0116 02:13:24.230850  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | Writing magic tar header
	I0116 02:13:24.230875  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | Writing SSH key tar header
	I0116 02:13:24.230890  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:24.230765  987270 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17967-971255/.minikube/machines/ingress-addon-legacy-473102 ...
	I0116 02:13:24.230922  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube/machines/ingress-addon-legacy-473102 (perms=drwx------)
	I0116 02:13:24.230942  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/ingress-addon-legacy-473102
	I0116 02:13:24.230953  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube/machines (perms=drwxr-xr-x)
	I0116 02:13:24.230971  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube (perms=drwxr-xr-x)
	I0116 02:13:24.230980  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255 (perms=drwxrwxr-x)
	I0116 02:13:24.230993  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 02:13:24.231018  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 02:13:24.231043  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube/machines
	I0116 02:13:24.231066  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:13:24.231073  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Creating domain...
	I0116 02:13:24.231083  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255
	I0116 02:13:24.231091  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 02:13:24.231099  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | Checking permissions on dir: /home/jenkins
	I0116 02:13:24.231107  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | Checking permissions on dir: /home
	I0116 02:13:24.231115  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | Skipping /home - not owner
	I0116 02:13:24.232322  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) define libvirt domain using xml: 
	I0116 02:13:24.232348  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) <domain type='kvm'>
	I0116 02:13:24.232362  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)   <name>ingress-addon-legacy-473102</name>
	I0116 02:13:24.232371  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)   <memory unit='MiB'>4096</memory>
	I0116 02:13:24.232382  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)   <vcpu>2</vcpu>
	I0116 02:13:24.232405  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)   <features>
	I0116 02:13:24.232423  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     <acpi/>
	I0116 02:13:24.232432  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     <apic/>
	I0116 02:13:24.232444  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     <pae/>
	I0116 02:13:24.232452  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     
	I0116 02:13:24.232459  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)   </features>
	I0116 02:13:24.232468  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)   <cpu mode='host-passthrough'>
	I0116 02:13:24.232477  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)   
	I0116 02:13:24.232489  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)   </cpu>
	I0116 02:13:24.232503  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)   <os>
	I0116 02:13:24.232520  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     <type>hvm</type>
	I0116 02:13:24.232531  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     <boot dev='cdrom'/>
	I0116 02:13:24.232539  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     <boot dev='hd'/>
	I0116 02:13:24.232547  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     <bootmenu enable='no'/>
	I0116 02:13:24.232559  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)   </os>
	I0116 02:13:24.232571  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)   <devices>
	I0116 02:13:24.232584  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     <disk type='file' device='cdrom'>
	I0116 02:13:24.232604  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)       <source file='/home/jenkins/minikube-integration/17967-971255/.minikube/machines/ingress-addon-legacy-473102/boot2docker.iso'/>
	I0116 02:13:24.232617  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)       <target dev='hdc' bus='scsi'/>
	I0116 02:13:24.232630  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)       <readonly/>
	I0116 02:13:24.232643  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     </disk>
	I0116 02:13:24.232658  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     <disk type='file' device='disk'>
	I0116 02:13:24.232673  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 02:13:24.232693  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)       <source file='/home/jenkins/minikube-integration/17967-971255/.minikube/machines/ingress-addon-legacy-473102/ingress-addon-legacy-473102.rawdisk'/>
	I0116 02:13:24.232707  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)       <target dev='hda' bus='virtio'/>
	I0116 02:13:24.232721  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     </disk>
	I0116 02:13:24.232733  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     <interface type='network'>
	I0116 02:13:24.232748  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)       <source network='mk-ingress-addon-legacy-473102'/>
	I0116 02:13:24.232764  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)       <model type='virtio'/>
	I0116 02:13:24.232795  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     </interface>
	I0116 02:13:24.232814  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     <interface type='network'>
	I0116 02:13:24.232826  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)       <source network='default'/>
	I0116 02:13:24.232837  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)       <model type='virtio'/>
	I0116 02:13:24.232861  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     </interface>
	I0116 02:13:24.232874  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     <serial type='pty'>
	I0116 02:13:24.232887  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)       <target port='0'/>
	I0116 02:13:24.232915  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     </serial>
	I0116 02:13:24.232931  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     <console type='pty'>
	I0116 02:13:24.232944  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)       <target type='serial' port='0'/>
	I0116 02:13:24.232957  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     </console>
	I0116 02:13:24.232970  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     <rng model='virtio'>
	I0116 02:13:24.232993  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)       <backend model='random'>/dev/random</backend>
	I0116 02:13:24.233011  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     </rng>
	I0116 02:13:24.233023  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     
	I0116 02:13:24.233032  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)     
	I0116 02:13:24.233045  987236 main.go:141] libmachine: (ingress-addon-legacy-473102)   </devices>
	I0116 02:13:24.233053  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) </domain>
	I0116 02:13:24.233061  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) 
	I0116 02:13:24.237527  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:97:e5:68 in network default
	I0116 02:13:24.238075  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:24.238093  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Ensuring networks are active...
	I0116 02:13:24.239044  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Ensuring network default is active
	I0116 02:13:24.239439  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Ensuring network mk-ingress-addon-legacy-473102 is active
	I0116 02:13:24.240087  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Getting domain xml...
	I0116 02:13:24.240888  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Creating domain...
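At this point the kvm2 driver hands the XML logged above to libvirt and boots the resulting domain. A minimal sketch of that flow, assuming the github.com/libvirt/libvirt-go bindings rather than the driver's actual code:

package main

import (
	"log"
	"os"

	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	// Connect to the local system libvirt daemon, as `virsh -c qemu:///system` would.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// domain.xml would hold a definition like the one in the log above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// DomainDefineXML registers the persistent domain; Create boots it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
	log.Println("domain defined and started")
}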
	I0116 02:13:25.462572  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Waiting to get IP...
	I0116 02:13:25.463500  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:25.463903  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | unable to find current IP address of domain ingress-addon-legacy-473102 in network mk-ingress-addon-legacy-473102
	I0116 02:13:25.463968  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:25.463893  987270 retry.go:31] will retry after 226.950232ms: waiting for machine to come up
	I0116 02:13:25.692481  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:25.693064  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | unable to find current IP address of domain ingress-addon-legacy-473102 in network mk-ingress-addon-legacy-473102
	I0116 02:13:25.693097  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:25.692988  987270 retry.go:31] will retry after 363.772043ms: waiting for machine to come up
	I0116 02:13:26.058763  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:26.059345  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | unable to find current IP address of domain ingress-addon-legacy-473102 in network mk-ingress-addon-legacy-473102
	I0116 02:13:26.059376  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:26.059269  987270 retry.go:31] will retry after 420.462775ms: waiting for machine to come up
	I0116 02:13:26.480916  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:26.481419  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | unable to find current IP address of domain ingress-addon-legacy-473102 in network mk-ingress-addon-legacy-473102
	I0116 02:13:26.481451  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:26.481350  987270 retry.go:31] will retry after 541.562749ms: waiting for machine to come up
	I0116 02:13:27.024349  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:27.024850  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | unable to find current IP address of domain ingress-addon-legacy-473102 in network mk-ingress-addon-legacy-473102
	I0116 02:13:27.024886  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:27.024781  987270 retry.go:31] will retry after 759.442991ms: waiting for machine to come up
	I0116 02:13:27.785850  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:27.786303  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | unable to find current IP address of domain ingress-addon-legacy-473102 in network mk-ingress-addon-legacy-473102
	I0116 02:13:27.786332  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:27.786249  987270 retry.go:31] will retry after 707.891249ms: waiting for machine to come up
	I0116 02:13:28.495614  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:28.496103  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | unable to find current IP address of domain ingress-addon-legacy-473102 in network mk-ingress-addon-legacy-473102
	I0116 02:13:28.496133  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:28.496050  987270 retry.go:31] will retry after 1.187605009s: waiting for machine to come up
	I0116 02:13:29.684862  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:29.685271  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | unable to find current IP address of domain ingress-addon-legacy-473102 in network mk-ingress-addon-legacy-473102
	I0116 02:13:29.685301  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:29.685221  987270 retry.go:31] will retry after 1.244123845s: waiting for machine to come up
	I0116 02:13:30.931795  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:30.932206  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | unable to find current IP address of domain ingress-addon-legacy-473102 in network mk-ingress-addon-legacy-473102
	I0116 02:13:30.932242  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:30.932148  987270 retry.go:31] will retry after 1.17839034s: waiting for machine to come up
	I0116 02:13:32.112607  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:32.113167  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | unable to find current IP address of domain ingress-addon-legacy-473102 in network mk-ingress-addon-legacy-473102
	I0116 02:13:32.113192  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:32.113106  987270 retry.go:31] will retry after 1.802959822s: waiting for machine to come up
	I0116 02:13:33.917942  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:33.918605  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | unable to find current IP address of domain ingress-addon-legacy-473102 in network mk-ingress-addon-legacy-473102
	I0116 02:13:33.918642  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:33.918525  987270 retry.go:31] will retry after 2.438439551s: waiting for machine to come up
	I0116 02:13:36.358254  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:36.358747  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | unable to find current IP address of domain ingress-addon-legacy-473102 in network mk-ingress-addon-legacy-473102
	I0116 02:13:36.358780  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:36.358685  987270 retry.go:31] will retry after 2.598355212s: waiting for machine to come up
	I0116 02:13:38.958721  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:38.959073  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | unable to find current IP address of domain ingress-addon-legacy-473102 in network mk-ingress-addon-legacy-473102
	I0116 02:13:38.959129  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:38.959037  987270 retry.go:31] will retry after 4.485485844s: waiting for machine to come up
	I0116 02:13:43.449671  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:43.449921  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | unable to find current IP address of domain ingress-addon-legacy-473102 in network mk-ingress-addon-legacy-473102
	I0116 02:13:43.449953  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | I0116 02:13:43.449868  987270 retry.go:31] will retry after 4.558554757s: waiting for machine to come up
	I0116 02:13:48.012960  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.013516  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Found IP for machine: 192.168.39.44
	I0116 02:13:48.013611  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has current primary IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.013645  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Reserving static IP address...
	I0116 02:13:48.014023  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-473102", mac: "52:54:00:32:65:d2", ip: "192.168.39.44"} in network mk-ingress-addon-legacy-473102
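The repeated "will retry after ..." lines above are a poll-with-growing-backoff loop: the driver asks libvirt for a DHCP lease matching the domain's MAC and sleeps a little longer after each miss until an address appears. A rough, self-contained sketch of that pattern (lookupIP is a placeholder, not minikube's retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying libvirt's DHCP leases for the domain's MAC;
// it is a placeholder for illustration only.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP, sleeping a randomized, growing interval between
// attempts, and gives up once the deadline passes.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Jitter the wait and grow it, mirroring the
		// 226ms / 363ms / 420ms / ... progression in the log.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2
	}
	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}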
	I0116 02:13:48.090195  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | Getting to WaitForSSH function...
	I0116 02:13:48.090244  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Reserved static IP address: 192.168.39.44
	I0116 02:13:48.090260  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Waiting for SSH to be available...
	I0116 02:13:48.092971  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.093406  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:minikube Clientid:01:52:54:00:32:65:d2}
	I0116 02:13:48.093434  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.093606  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | Using SSH client type: external
	I0116 02:13:48.093633  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/ingress-addon-legacy-473102/id_rsa (-rw-------)
	I0116 02:13:48.093671  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.44 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/ingress-addon-legacy-473102/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 02:13:48.093691  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | About to run SSH command:
	I0116 02:13:48.093710  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | exit 0
	I0116 02:13:48.181843  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | SSH cmd err, output: <nil>: 
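The "Using SSH client type: external" lines show how SSH readiness is probed: run `exit 0` on the guest with host-key checking disabled and treat a zero exit status as "sshd is up". A small sketch of the same probe via os/exec, with the key path and options copied from the log; the surrounding retry loop is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the guest; success means sshd accepts our key.
func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+ip,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	ip := "192.168.39.44"
	key := "/home/jenkins/minikube-integration/17967-971255/.minikube/machines/ingress-addon-legacy-473102/id_rsa"
	for i := 0; i < 30; i++ {
		if sshReady(ip, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}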
	I0116 02:13:48.182156  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) KVM machine creation complete!
	I0116 02:13:48.182537  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetConfigRaw
	I0116 02:13:48.183172  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .DriverName
	I0116 02:13:48.183410  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .DriverName
	I0116 02:13:48.183598  987236 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0116 02:13:48.183614  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetState
	I0116 02:13:48.184914  987236 main.go:141] libmachine: Detecting operating system of created instance...
	I0116 02:13:48.184935  987236 main.go:141] libmachine: Waiting for SSH to be available...
	I0116 02:13:48.184945  987236 main.go:141] libmachine: Getting to WaitForSSH function...
	I0116 02:13:48.184954  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHHostname
	I0116 02:13:48.187239  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.187597  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:13:48.187626  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.187775  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHPort
	I0116 02:13:48.187965  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:13:48.188109  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:13:48.188238  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHUsername
	I0116 02:13:48.188427  987236 main.go:141] libmachine: Using SSH client type: native
	I0116 02:13:48.188808  987236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0116 02:13:48.188822  987236 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0116 02:13:48.301625  987236 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:13:48.301659  987236 main.go:141] libmachine: Detecting the provisioner...
	I0116 02:13:48.301668  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHHostname
	I0116 02:13:48.304650  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.305001  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:13:48.305038  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.305178  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHPort
	I0116 02:13:48.305465  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:13:48.305648  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:13:48.305819  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHUsername
	I0116 02:13:48.306008  987236 main.go:141] libmachine: Using SSH client type: native
	I0116 02:13:48.306341  987236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0116 02:13:48.306353  987236 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0116 02:13:48.418879  987236 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0116 02:13:48.418977  987236 main.go:141] libmachine: found compatible host: buildroot
	I0116 02:13:48.418989  987236 main.go:141] libmachine: Provisioning with buildroot...
	I0116 02:13:48.418998  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetMachineName
	I0116 02:13:48.419282  987236 buildroot.go:166] provisioning hostname "ingress-addon-legacy-473102"
	I0116 02:13:48.419315  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetMachineName
	I0116 02:13:48.419598  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHHostname
	I0116 02:13:48.422751  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.423068  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:13:48.423104  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.423269  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHPort
	I0116 02:13:48.423515  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:13:48.423667  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:13:48.423842  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHUsername
	I0116 02:13:48.423986  987236 main.go:141] libmachine: Using SSH client type: native
	I0116 02:13:48.424318  987236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0116 02:13:48.424339  987236 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-473102 && echo "ingress-addon-legacy-473102" | sudo tee /etc/hostname
	I0116 02:13:48.550644  987236 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-473102
	
	I0116 02:13:48.550679  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHHostname
	I0116 02:13:48.553336  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.553771  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:13:48.553824  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.554046  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHPort
	I0116 02:13:48.554291  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:13:48.554501  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:13:48.554642  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHUsername
	I0116 02:13:48.554836  987236 main.go:141] libmachine: Using SSH client type: native
	I0116 02:13:48.555167  987236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0116 02:13:48.555186  987236 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-473102' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-473102/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-473102' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:13:48.674830  987236 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:13:48.674864  987236 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 02:13:48.674890  987236 buildroot.go:174] setting up certificates
	I0116 02:13:48.674901  987236 provision.go:83] configureAuth start
	I0116 02:13:48.674911  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetMachineName
	I0116 02:13:48.675196  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetIP
	I0116 02:13:48.677772  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.678166  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:13:48.678200  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.678329  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHHostname
	I0116 02:13:48.680437  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.680767  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:13:48.680797  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.680959  987236 provision.go:138] copyHostCerts
	I0116 02:13:48.680999  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 02:13:48.681041  987236 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 02:13:48.681053  987236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 02:13:48.681145  987236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 02:13:48.681231  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 02:13:48.681255  987236 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 02:13:48.681262  987236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 02:13:48.681287  987236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 02:13:48.681329  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 02:13:48.681344  987236 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 02:13:48.681350  987236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 02:13:48.681369  987236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 02:13:48.681411  987236 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-473102 san=[192.168.39.44 192.168.39.44 localhost 127.0.0.1 minikube ingress-addon-legacy-473102]
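The "generating server cert" line issues a server certificate signed by the local minikube CA with the VM IP, localhost and the machine name all listed as SANs. A compact sketch of producing such a SAN-bearing certificate with Go's crypto/x509; it generates its own CA in-process for self-containment, whereas minikube reuses the CA under .minikube/certs (error handling trimmed for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key and self-signed CA certificate (minikube loads these from disk instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server key and certificate carrying the SANs seen in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-473102"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.44"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-473102"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}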
	I0116 02:13:48.957235  987236 provision.go:172] copyRemoteCerts
	I0116 02:13:48.957311  987236 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:13:48.957367  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHHostname
	I0116 02:13:48.960382  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.960708  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:13:48.960748  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:48.960921  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHPort
	I0116 02:13:48.961272  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:13:48.961517  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHUsername
	I0116 02:13:48.961691  987236 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/ingress-addon-legacy-473102/id_rsa Username:docker}
	I0116 02:13:49.047489  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 02:13:49.047565  987236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 02:13:49.071835  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 02:13:49.071923  987236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0116 02:13:49.096646  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 02:13:49.096719  987236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 02:13:49.120564  987236 provision.go:86] duration metric: configureAuth took 445.648616ms
	I0116 02:13:49.120598  987236 buildroot.go:189] setting minikube options for container-runtime
	I0116 02:13:49.120802  987236 config.go:182] Loaded profile config "ingress-addon-legacy-473102": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0116 02:13:49.120886  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHHostname
	I0116 02:13:49.123454  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:49.123805  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:13:49.123837  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:49.123916  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHPort
	I0116 02:13:49.124144  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:13:49.124276  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:13:49.124392  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHUsername
	I0116 02:13:49.124524  987236 main.go:141] libmachine: Using SSH client type: native
	I0116 02:13:49.124948  987236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0116 02:13:49.124982  987236 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 02:13:49.430915  987236 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 02:13:49.430947  987236 main.go:141] libmachine: Checking connection to Docker...
	I0116 02:13:49.430961  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetURL
	I0116 02:13:49.432231  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | Using libvirt version 6000000
	I0116 02:13:49.434436  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:49.434791  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:13:49.434816  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:49.434965  987236 main.go:141] libmachine: Docker is up and running!
	I0116 02:13:49.434979  987236 main.go:141] libmachine: Reticulating splines...
	I0116 02:13:49.434986  987236 client.go:171] LocalClient.Create took 25.737249238s
	I0116 02:13:49.435009  987236 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-473102" took 25.737308113s
	I0116 02:13:49.435025  987236 start.go:300] post-start starting for "ingress-addon-legacy-473102" (driver="kvm2")
	I0116 02:13:49.435040  987236 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:13:49.435060  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .DriverName
	I0116 02:13:49.435330  987236 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:13:49.435361  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHHostname
	I0116 02:13:49.437502  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:49.437826  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:13:49.437862  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:49.438039  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHPort
	I0116 02:13:49.438235  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:13:49.438402  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHUsername
	I0116 02:13:49.438552  987236 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/ingress-addon-legacy-473102/id_rsa Username:docker}
	I0116 02:13:49.524090  987236 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:13:49.528322  987236 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 02:13:49.528354  987236 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 02:13:49.528440  987236 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 02:13:49.528519  987236 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 02:13:49.528531  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> /etc/ssl/certs/9784822.pem
	I0116 02:13:49.528616  987236 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 02:13:49.537766  987236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 02:13:49.559426  987236 start.go:303] post-start completed in 124.384136ms
	I0116 02:13:49.559495  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetConfigRaw
	I0116 02:13:49.560119  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetIP
	I0116 02:13:49.562688  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:49.563054  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:13:49.563080  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:49.563282  987236 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/config.json ...
	I0116 02:13:49.563454  987236 start.go:128] duration metric: createHost completed in 25.885024572s
	I0116 02:13:49.563478  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHHostname
	I0116 02:13:49.565881  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:49.566245  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:13:49.566277  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:49.566397  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHPort
	I0116 02:13:49.566606  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:13:49.566764  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:13:49.566903  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHUsername
	I0116 02:13:49.567083  987236 main.go:141] libmachine: Using SSH client type: native
	I0116 02:13:49.567492  987236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0116 02:13:49.567507  987236 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 02:13:49.678527  987236 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705371229.663081303
	
	I0116 02:13:49.678558  987236 fix.go:206] guest clock: 1705371229.663081303
	I0116 02:13:49.678569  987236 fix.go:219] Guest: 2024-01-16 02:13:49.663081303 +0000 UTC Remote: 2024-01-16 02:13:49.563466104 +0000 UTC m=+30.740104424 (delta=99.615199ms)
	I0116 02:13:49.678597  987236 fix.go:190] guest clock delta is within tolerance: 99.615199ms
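The guest-clock lines above parse the guest's `date +%s.%N` output, compare it with the host wall clock at the moment the command returned, and only resync the guest clock when the delta exceeds a tolerance. A small sketch of that comparison; the one-second tolerance is an assumption for illustration, only the delta logic mirrors the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1705371229.663081303" (seconds.nanoseconds from
// `date +%s.%N` on the guest) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1705371229.663081303")
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}

	// Assumed tolerance for this sketch; the real check only has to decide
	// whether the guest clock needs to be reset.
	const tolerance = time.Second
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
	}
}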
	I0116 02:13:49.678612  987236 start.go:83] releasing machines lock for "ingress-addon-legacy-473102", held for 26.000287057s
	I0116 02:13:49.678636  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .DriverName
	I0116 02:13:49.678964  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetIP
	I0116 02:13:49.681722  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:49.682194  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:13:49.682228  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:49.682480  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .DriverName
	I0116 02:13:49.683056  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .DriverName
	I0116 02:13:49.683258  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .DriverName
	I0116 02:13:49.683362  987236 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:13:49.683419  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHHostname
	I0116 02:13:49.683529  987236 ssh_runner.go:195] Run: cat /version.json
	I0116 02:13:49.683558  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHHostname
	I0116 02:13:49.686239  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:49.686585  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:49.686619  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:13:49.686682  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:49.686818  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHPort
	I0116 02:13:49.687018  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:13:49.687038  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:13:49.687083  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:49.687180  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHUsername
	I0116 02:13:49.687248  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHPort
	I0116 02:13:49.687354  987236 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/ingress-addon-legacy-473102/id_rsa Username:docker}
	I0116 02:13:49.687387  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:13:49.687563  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHUsername
	I0116 02:13:49.687692  987236 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/ingress-addon-legacy-473102/id_rsa Username:docker}
	I0116 02:13:49.795326  987236 ssh_runner.go:195] Run: systemctl --version
	I0116 02:13:49.801511  987236 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 02:13:49.957884  987236 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 02:13:49.964458  987236 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 02:13:49.964543  987236 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:13:49.980388  987236 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 02:13:49.980429  987236 start.go:475] detecting cgroup driver to use...
	I0116 02:13:49.980540  987236 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:13:49.994747  987236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:13:50.007167  987236 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:13:50.007240  987236 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:13:50.020040  987236 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:13:50.032751  987236 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:13:50.136751  987236 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:13:50.252301  987236 docker.go:233] disabling docker service ...
	I0116 02:13:50.252384  987236 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:13:50.266612  987236 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:13:50.278379  987236 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:13:50.385332  987236 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:13:50.492222  987236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 02:13:50.504750  987236 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:13:50.521968  987236 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0116 02:13:50.522054  987236 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:13:50.531366  987236 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 02:13:50.531446  987236 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:13:50.541097  987236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:13:50.550569  987236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:13:50.559813  987236 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:13:50.569106  987236 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:13:50.577128  987236 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 02:13:50.577184  987236 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 02:13:50.589577  987236 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:13:50.598792  987236 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:13:50.707229  987236 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 02:13:51.132117  987236 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 02:13:51.132196  987236 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 02:13:51.138012  987236 start.go:543] Will wait 60s for crictl version
	I0116 02:13:51.138079  987236 ssh_runner.go:195] Run: which crictl
	I0116 02:13:51.142119  987236 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:13:51.187189  987236 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 02:13:51.187305  987236 ssh_runner.go:195] Run: crio --version
	I0116 02:13:51.232183  987236 ssh_runner.go:195] Run: crio --version
	I0116 02:13:51.348246  987236 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0116 02:13:51.430171  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetIP
	I0116 02:13:51.433575  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:51.434065  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:13:51.434099  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:13:51.434342  987236 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 02:13:51.438754  987236 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:13:51.451238  987236 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 02:13:51.451306  987236 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:13:51.486201  987236 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0116 02:13:51.486305  987236 ssh_runner.go:195] Run: which lz4
	I0116 02:13:51.490433  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0116 02:13:51.490563  987236 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 02:13:51.494836  987236 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 02:13:51.494876  987236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0116 02:13:53.359502  987236 crio.go:444] Took 1.868987 seconds to copy over tarball
	I0116 02:13:53.359612  987236 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 02:13:56.770258  987236 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.410612386s)
	I0116 02:13:56.770285  987236 crio.go:451] Took 3.410755 seconds to extract the tarball
	I0116 02:13:56.770294  987236 ssh_runner.go:146] rm: /preloaded.tar.lz4
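The preload flow above is: stat the tarball, copy it over when missing, extract it into /var with lz4, then remove it. A minimal Go sketch of the check-then-extract part, assuming the tarball has already been copied to /preloaded.tar.lz4 as in the logged scp step:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // Sketch only: mirrors the preload steps in the log. The tarball path and
    // tar flags come from the logged commands; not minikube's actual code.
    func main() {
    	const tarball = "/preloaded.tar.lz4"
    	if _, err := os.Stat(tarball); err != nil {
    		fmt.Println("no preload tarball, images will be pulled instead:", err)
    		return
    	}
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    	_ = os.Remove(tarball) // best-effort cleanup, as in the log
    }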
	I0116 02:13:56.816823  987236 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:13:56.871370  987236 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0116 02:13:56.871409  987236 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 02:13:56.871523  987236 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:13:56.871556  987236 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 02:13:56.871603  987236 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 02:13:56.871622  987236 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 02:13:56.871574  987236 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0116 02:13:56.871605  987236 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 02:13:56.871777  987236 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0116 02:13:56.871851  987236 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0116 02:13:56.873015  987236 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0116 02:13:56.873026  987236 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 02:13:56.873035  987236 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 02:13:56.873013  987236 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 02:13:56.873014  987236 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 02:13:56.873089  987236 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:13:56.873122  987236 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0116 02:13:56.873188  987236 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0116 02:13:57.028975  987236 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0116 02:13:57.041520  987236 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 02:13:57.042514  987236 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0116 02:13:57.052169  987236 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0116 02:13:57.091703  987236 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0116 02:13:57.105548  987236 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0116 02:13:57.109612  987236 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0116 02:13:57.110596  987236 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0116 02:13:57.110656  987236 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 02:13:57.110706  987236 ssh_runner.go:195] Run: which crictl
	I0116 02:13:57.162852  987236 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:13:57.233752  987236 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0116 02:13:57.233817  987236 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0116 02:13:57.233835  987236 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 02:13:57.233853  987236 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0116 02:13:57.233892  987236 ssh_runner.go:195] Run: which crictl
	I0116 02:13:57.233907  987236 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0116 02:13:57.233952  987236 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 02:13:57.233957  987236 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0116 02:13:57.233984  987236 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0116 02:13:57.234002  987236 ssh_runner.go:195] Run: which crictl
	I0116 02:13:57.233897  987236 ssh_runner.go:195] Run: which crictl
	I0116 02:13:57.234033  987236 ssh_runner.go:195] Run: which crictl
	I0116 02:13:57.234042  987236 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0116 02:13:57.233985  987236 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0116 02:13:57.234072  987236 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0116 02:13:57.234091  987236 ssh_runner.go:195] Run: which crictl
	I0116 02:13:57.234011  987236 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0116 02:13:57.234115  987236 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 02:13:57.234132  987236 ssh_runner.go:195] Run: which crictl
	I0116 02:13:57.383093  987236 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0116 02:13:57.383157  987236 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0116 02:13:57.383193  987236 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0116 02:13:57.383279  987236 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 02:13:57.383365  987236 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0116 02:13:57.383381  987236 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0116 02:13:57.383369  987236 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0116 02:13:57.536911  987236 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0116 02:13:57.537063  987236 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0116 02:13:57.537077  987236 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0116 02:13:57.538691  987236 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0116 02:13:57.538766  987236 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0116 02:13:57.538830  987236 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0116 02:13:57.538885  987236 cache_images.go:92] LoadImages completed in 667.458925ms
	W0116 02:13:57.538968  987236 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
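The "needs transfer" decisions above come from probing each image with podman image inspect; when the on-disk cache files under .minikube/cache/images are absent, the warning is logged and kubeadm pulls the images later instead. A minimal Go sketch of that kind of presence probe by exit status, assuming sudo and podman are available on the node as in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Sketch only: treats a zero exit status from "podman image inspect"
    // as "image already present", which is the check the log relies on.
    func imagePresent(ref string) bool {
    	return exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", ref).Run() == nil
    }

    func main() {
    	for _, ref := range []string{
    		"registry.k8s.io/kube-apiserver:v1.18.20",
    		"registry.k8s.io/pause:3.2",
    	} {
    		fmt.Println(ref, "present:", imagePresent(ref))
    	}
    }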
	I0116 02:13:57.539054  987236 ssh_runner.go:195] Run: crio config
	I0116 02:13:57.600673  987236 cni.go:84] Creating CNI manager for ""
	I0116 02:13:57.600698  987236 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 02:13:57.600724  987236 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:13:57.600748  987236 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.44 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-473102 NodeName:ingress-addon-legacy-473102 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.44"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.44 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 02:13:57.600937  987236 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.44
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-473102"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.44
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.44"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 02:13:57.601040  987236 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-473102 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-473102 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 02:13:57.601111  987236 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0116 02:13:57.611807  987236 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 02:13:57.611910  987236 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 02:13:57.622396  987236 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I0116 02:13:57.638872  987236 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0116 02:13:57.655712  987236 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0116 02:13:57.672622  987236 ssh_runner.go:195] Run: grep 192.168.39.44	control-plane.minikube.internal$ /etc/hosts
	I0116 02:13:57.676559  987236 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.44	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
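The one-liner above removes any stale control-plane.minikube.internal entry from /etc/hosts and appends the address used by this run. A Go sketch of the same rewrite, with the path and hostname taken from the log (a real implementation would write via a temp file and copy, as the logged bash pipeline does):

    package main

    import (
    	"os"
    	"strings"
    )

    // Sketch only: drops any existing "control-plane.minikube.internal" line
    // and appends the address from this run's log.
    func main() {
    	const entry = "192.168.39.44\tcontrol-plane.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
    	var kept []string
    	for _, line := range lines {
    		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts",
    		[]byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		panic(err)
    	}
    }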
	I0116 02:13:57.689198  987236 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102 for IP: 192.168.39.44
	I0116 02:13:57.689236  987236 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:13:57.689420  987236 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 02:13:57.689520  987236 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 02:13:57.689579  987236 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.key
	I0116 02:13:57.689592  987236 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt with IP's: []
	I0116 02:13:57.945463  987236 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt ...
	I0116 02:13:57.945511  987236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: {Name:mk1d5dcd8b8e79ae344383931ed9766381897c08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:13:57.945715  987236 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.key ...
	I0116 02:13:57.945743  987236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.key: {Name:mkad829767e2a2dd9efe5d8a094b93f361f0f201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:13:57.945874  987236 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/apiserver.key.6d5b308a
	I0116 02:13:57.945903  987236 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/apiserver.crt.6d5b308a with IP's: [192.168.39.44 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 02:13:58.118558  987236 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/apiserver.crt.6d5b308a ...
	I0116 02:13:58.118593  987236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/apiserver.crt.6d5b308a: {Name:mkda8a8288ea5d425778a8472236ca2435c963d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:13:58.118762  987236 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/apiserver.key.6d5b308a ...
	I0116 02:13:58.118778  987236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/apiserver.key.6d5b308a: {Name:mkceb405df5b26a7070fc293f7d8cbf5a50ff3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:13:58.118845  987236 certs.go:337] copying /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/apiserver.crt.6d5b308a -> /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/apiserver.crt
	I0116 02:13:58.118967  987236 certs.go:341] copying /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/apiserver.key.6d5b308a -> /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/apiserver.key
	I0116 02:13:58.119025  987236 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/proxy-client.key
	I0116 02:13:58.119046  987236 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/proxy-client.crt with IP's: []
	I0116 02:13:58.416537  987236 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/proxy-client.crt ...
	I0116 02:13:58.416574  987236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/proxy-client.crt: {Name:mk038f61cc483ccc47110e5dde4ecaabfc18c558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:13:58.416738  987236 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/proxy-client.key ...
	I0116 02:13:58.416756  987236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/proxy-client.key: {Name:mkd5f36029064c3059fc19eb20b5e3df4ebb1130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:13:58.416829  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0116 02:13:58.416848  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0116 02:13:58.416859  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0116 02:13:58.416870  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0116 02:13:58.416883  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 02:13:58.416896  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 02:13:58.416908  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 02:13:58.416921  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 02:13:58.416987  987236 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 02:13:58.417020  987236 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 02:13:58.417033  987236 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 02:13:58.417058  987236 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 02:13:58.417092  987236 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:13:58.417114  987236 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 02:13:58.417154  987236 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 02:13:58.417203  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:13:58.417230  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem -> /usr/share/ca-certificates/978482.pem
	I0116 02:13:58.417242  987236 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> /usr/share/ca-certificates/9784822.pem
	I0116 02:13:58.417897  987236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 02:13:58.442553  987236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 02:13:58.466721  987236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 02:13:58.490636  987236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 02:13:58.513551  987236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:13:58.537198  987236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 02:13:58.559982  987236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:13:58.582983  987236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 02:13:58.606933  987236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:13:58.630219  987236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 02:13:58.653594  987236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 02:13:58.676678  987236 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 02:13:58.693274  987236 ssh_runner.go:195] Run: openssl version
	I0116 02:13:58.699049  987236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 02:13:58.710726  987236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 02:13:58.715762  987236 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 02:13:58.715824  987236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 02:13:58.721764  987236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 02:13:58.734374  987236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 02:13:58.746793  987236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 02:13:58.751539  987236 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 02:13:58.751627  987236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 02:13:58.757368  987236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 02:13:58.768512  987236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:13:58.779805  987236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:13:58.784681  987236 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:13:58.784756  987236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:13:58.790515  987236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
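Each CA file copied to /usr/share/ca-certificates above is then made visible to OpenSSL-based tools by linking it as <subject-hash>.0 under /etc/ssl/certs, which is what the openssl x509 -hash and ln -fs commands in the log do. A hedged Go sketch of the same pattern, shelling out to openssl for the hash (requires root for the symlink):

    package main

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // Sketch only: computes the OpenSSL subject hash for a CA file and links
    // it as /etc/ssl/certs/<hash>.0, mirroring the logged commands.
    func linkCA(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // replace any stale link
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		panic(err)
    	}
    }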
	I0116 02:13:58.801681  987236 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:13:58.805923  987236 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:13:58.805980  987236 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-473102 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.18.20 ClusterName:ingress-addon-legacy-473102 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.44 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mou
ntMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:13:58.806075  987236 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 02:13:58.806133  987236 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 02:13:58.850863  987236 cri.go:89] found id: ""
	I0116 02:13:58.850953  987236 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 02:13:58.861237  987236 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 02:13:58.871499  987236 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 02:13:58.881340  987236 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
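The failed ls above is how the run decides there is no stale control-plane configuration to clean up, so stale-config cleanup is skipped and kubeadm init proceeds on a fresh node. A minimal Go sketch of that presence check, using the same file list as the logged command:

    package main

    import (
    	"fmt"
    	"os"
    )

    // Sketch only: if none of these kubeconfigs exist, there is nothing
    // stale to clean up before running kubeadm init.
    func main() {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	stale := false
    	for _, f := range files {
    		if _, err := os.Stat(f); err == nil {
    			stale = true
    		}
    	}
    	fmt.Println("stale control-plane config present:", stale)
    }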
	I0116 02:13:58.943896  987236 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0116 02:13:59.002580  987236 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0116 02:13:59.002803  987236 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 02:13:59.143578  987236 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 02:13:59.143719  987236 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 02:13:59.143899  987236 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 02:13:59.401031  987236 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:13:59.402610  987236 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:13:59.402846  987236 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 02:13:59.527535  987236 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:13:59.529579  987236 out.go:204]   - Generating certificates and keys ...
	I0116 02:13:59.529694  987236 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 02:13:59.529782  987236 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 02:13:59.776264  987236 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 02:13:59.895885  987236 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 02:14:00.045860  987236 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 02:14:00.092844  987236 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 02:14:00.216905  987236 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 02:14:00.217094  987236 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-473102 localhost] and IPs [192.168.39.44 127.0.0.1 ::1]
	I0116 02:14:00.541548  987236 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 02:14:00.541763  987236 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-473102 localhost] and IPs [192.168.39.44 127.0.0.1 ::1]
	I0116 02:14:00.747807  987236 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 02:14:00.877497  987236 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 02:14:00.992363  987236 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 02:14:00.992503  987236 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:14:01.124693  987236 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:14:01.372969  987236 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:14:01.765465  987236 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:14:01.916556  987236 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:14:01.917143  987236 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:14:01.919029  987236 out.go:204]   - Booting up control plane ...
	I0116 02:14:01.919178  987236 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:14:01.931351  987236 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:14:01.932333  987236 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:14:01.933158  987236 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 02:14:01.935507  987236 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 02:14:11.437257  987236 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.502843 seconds
	I0116 02:14:11.437444  987236 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 02:14:11.460549  987236 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 02:14:11.984409  987236 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 02:14:11.984578  987236 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-473102 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0116 02:14:12.493255  987236 kubeadm.go:322] [bootstrap-token] Using token: 0k7uzd.qqxqdn188057bze9
	I0116 02:14:12.494808  987236 out.go:204]   - Configuring RBAC rules ...
	I0116 02:14:12.494917  987236 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 02:14:12.504966  987236 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 02:14:12.512724  987236 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 02:14:12.516397  987236 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 02:14:12.521852  987236 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 02:14:12.531018  987236 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 02:14:12.540640  987236 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 02:14:12.799975  987236 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 02:14:12.932220  987236 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 02:14:12.933476  987236 kubeadm.go:322] 
	I0116 02:14:12.933576  987236 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 02:14:12.933611  987236 kubeadm.go:322] 
	I0116 02:14:12.933722  987236 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 02:14:12.933743  987236 kubeadm.go:322] 
	I0116 02:14:12.933788  987236 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 02:14:12.933891  987236 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 02:14:12.933937  987236 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 02:14:12.933944  987236 kubeadm.go:322] 
	I0116 02:14:12.933986  987236 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 02:14:12.934079  987236 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 02:14:12.934171  987236 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 02:14:12.934180  987236 kubeadm.go:322] 
	I0116 02:14:12.934272  987236 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 02:14:12.934362  987236 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 02:14:12.934385  987236 kubeadm.go:322] 
	I0116 02:14:12.934516  987236 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0k7uzd.qqxqdn188057bze9 \
	I0116 02:14:12.934660  987236 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b \
	I0116 02:14:12.934684  987236 kubeadm.go:322]     --control-plane 
	I0116 02:14:12.934689  987236 kubeadm.go:322] 
	I0116 02:14:12.934784  987236 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 02:14:12.934801  987236 kubeadm.go:322] 
	I0116 02:14:12.934884  987236 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0k7uzd.qqxqdn188057bze9 \
	I0116 02:14:12.935031  987236 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b 
	I0116 02:14:12.935489  987236 kubeadm.go:322] W0116 02:13:58.996275     954 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0116 02:14:12.935624  987236 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:14:12.935757  987236 kubeadm.go:322] W0116 02:14:01.926309     954 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0116 02:14:12.935897  987236 kubeadm.go:322] W0116 02:14:01.927462     954 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0116 02:14:12.935934  987236 cni.go:84] Creating CNI manager for ""
	I0116 02:14:12.935944  987236 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 02:14:12.937662  987236 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 02:14:12.938979  987236 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 02:14:12.950934  987236 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
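The 457-byte conflist written above configures the bridge CNI plugin for the 10.244.0.0/16 pod CIDR chosen earlier. The log does not show the file's contents, so the JSON below is only an assumed, typical bridge-plus-portmap configuration for that CIDR, written from Go the same way the log writes the file; it is not the exact bytes minikube uses:

    package main

    import "os"

    // Sketch only: the conflist body is an assumption based on common
    // bridge CNI configurations, not taken from the log.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist",
    		[]byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }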
	I0116 02:14:12.976246  987236 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 02:14:12.976339  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:12.976365  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=ingress-addon-legacy-473102 minikube.k8s.io/updated_at=2024_01_16T02_14_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:13.201407  987236 ops.go:34] apiserver oom_adj: -16
	I0116 02:14:13.205709  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:13.706576  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:14.206707  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:14.706450  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:15.206099  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:15.706532  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:16.206632  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:16.705829  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:17.205752  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:17.706065  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:18.206760  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:18.706498  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:19.206096  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:19.706252  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:20.206642  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:20.705902  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:21.206733  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:21.706693  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:22.206648  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:22.706163  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:23.205786  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:23.706416  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:24.205858  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:24.705724  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:25.206027  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:25.706493  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:26.205971  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:26.706051  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:27.206574  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:27.706620  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:28.205989  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:28.705996  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:29.205883  987236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:14:29.354330  987236 kubeadm.go:1088] duration metric: took 16.378078854s to wait for elevateKubeSystemPrivileges.
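The repeated "kubectl get sa default" invocations above are a poll loop: the RBAC binding for kube-system can only be applied once the default service account exists, so the run retries roughly every half second until the command succeeds (about 16 seconds here). A minimal Go sketch of such a poll, using the kubectl binary and kubeconfig paths from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // Sketch only: polls "kubectl get sa default" until it succeeds or a
    // deadline passes, which is what the repeated log lines correspond to.
    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		err := exec.Command("sudo",
    			"/var/lib/minikube/binaries/v1.18.20/kubectl",
    			"get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the default service account")
    }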
	I0116 02:14:29.354376  987236 kubeadm.go:406] StartCluster complete in 30.548399375s
	I0116 02:14:29.354435  987236 settings.go:142] acquiring lock: {Name:mk39c69fb5454efd5afab021c02c3bec1f1b4e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:14:29.354532  987236 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:14:29.355603  987236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:14:29.355937  987236 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 02:14:29.356046  987236 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 02:14:29.356126  987236 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-473102"
	I0116 02:14:29.356160  987236 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-473102"
	I0116 02:14:29.356184  987236 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-473102"
	I0116 02:14:29.356188  987236 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-473102"
	I0116 02:14:29.356243  987236 config.go:182] Loaded profile config "ingress-addon-legacy-473102": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0116 02:14:29.356286  987236 host.go:66] Checking if "ingress-addon-legacy-473102" exists ...
	I0116 02:14:29.356703  987236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:14:29.356740  987236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:14:29.356745  987236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:14:29.356779  987236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:14:29.356668  987236 kapi.go:59] client config for ingress-addon-legacy-473102: &rest.Config{Host:"https://192.168.39.44:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.key", CAFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]u
int8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:14:29.357573  987236 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 02:14:29.373243  987236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34275
	I0116 02:14:29.373266  987236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33099
	I0116 02:14:29.373723  987236 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:14:29.373797  987236 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:14:29.374341  987236 main.go:141] libmachine: Using API Version  1
	I0116 02:14:29.374348  987236 main.go:141] libmachine: Using API Version  1
	I0116 02:14:29.374358  987236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:14:29.374371  987236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:14:29.374728  987236 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:14:29.374741  987236 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:14:29.374966  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetState
	I0116 02:14:29.375326  987236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:14:29.375378  987236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:14:29.377529  987236 kapi.go:59] client config for ingress-addon-legacy-473102: &rest.Config{Host:"https://192.168.39.44:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.key", CAFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]u
int8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:14:29.377936  987236 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-473102"
	I0116 02:14:29.377985  987236 host.go:66] Checking if "ingress-addon-legacy-473102" exists ...
	I0116 02:14:29.378440  987236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:14:29.378496  987236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:14:29.392717  987236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37097
	I0116 02:14:29.393287  987236 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:14:29.393864  987236 main.go:141] libmachine: Using API Version  1
	I0116 02:14:29.393888  987236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:14:29.393924  987236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41339
	I0116 02:14:29.394328  987236 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:14:29.394328  987236 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:14:29.394587  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetState
	I0116 02:14:29.394797  987236 main.go:141] libmachine: Using API Version  1
	I0116 02:14:29.394810  987236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:14:29.395091  987236 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:14:29.395669  987236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:14:29.395721  987236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:14:29.396519  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .DriverName
	I0116 02:14:29.398792  987236 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:14:29.400399  987236 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:14:29.400420  987236 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 02:14:29.400441  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHHostname
	I0116 02:14:29.404406  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:14:29.404924  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:14:29.404948  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:14:29.405186  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHPort
	I0116 02:14:29.405419  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:14:29.405611  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHUsername
	I0116 02:14:29.405755  987236 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/ingress-addon-legacy-473102/id_rsa Username:docker}
	I0116 02:14:29.413087  987236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I0116 02:14:29.413625  987236 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:14:29.414122  987236 main.go:141] libmachine: Using API Version  1
	I0116 02:14:29.414147  987236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:14:29.414515  987236 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:14:29.414699  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetState
	I0116 02:14:29.416309  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .DriverName
	I0116 02:14:29.416545  987236 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 02:14:29.416561  987236 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 02:14:29.416576  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHHostname
	I0116 02:14:29.419412  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:14:29.419859  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:65:d2", ip: ""} in network mk-ingress-addon-legacy-473102: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:39 +0000 UTC Type:0 Mac:52:54:00:32:65:d2 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ingress-addon-legacy-473102 Clientid:01:52:54:00:32:65:d2}
	I0116 02:14:29.419890  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | domain ingress-addon-legacy-473102 has defined IP address 192.168.39.44 and MAC address 52:54:00:32:65:d2 in network mk-ingress-addon-legacy-473102
	I0116 02:14:29.419968  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHPort
	I0116 02:14:29.420187  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHKeyPath
	I0116 02:14:29.420341  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .GetSSHUsername
	I0116 02:14:29.420469  987236 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/ingress-addon-legacy-473102/id_rsa Username:docker}
	I0116 02:14:29.556691  987236 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
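The pipeline above edits the coredns ConfigMap with kubectl and sed, inserting a hosts{} stanza that resolves host.minikube.internal to the host gateway just before the forward plugin. A minimal client-go sketch of the same edit is shown below; the helper name injectHostRecord is hypothetical, it assumes the conventional "Corefile" data key, and the stanza indentation is kept simple rather than matching the sed expression exactly.

    package hostrecord

    import (
    	"context"
    	"fmt"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // injectHostRecord mirrors the kubectl|sed pipeline above: it inserts a
    // hosts{} stanza mapping host.minikube.internal to hostIP ahead of the
    // forward plugin in the coredns Corefile, then writes the ConfigMap back.
    func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
    	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	stanza := fmt.Sprintf("hosts {\n   %s host.minikube.internal\n   fallthrough\n}\n", hostIP)
    	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
    		"forward . /etc/resolv.conf", stanza+"forward . /etc/resolv.conf", 1)
    	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
    	return err
    }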
	I0116 02:14:29.567018  987236 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:14:29.578400  987236 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 02:14:29.920114  987236 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-473102" context rescaled to 1 replicas
	I0116 02:14:29.920198  987236 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.44 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:14:29.922241  987236 out.go:177] * Verifying Kubernetes components...
	I0116 02:14:29.923688  987236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:14:30.107002  987236 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0116 02:14:30.200730  987236 main.go:141] libmachine: Making call to close driver server
	I0116 02:14:30.200756  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .Close
	I0116 02:14:30.200828  987236 main.go:141] libmachine: Making call to close driver server
	I0116 02:14:30.200863  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .Close
	I0116 02:14:30.201094  987236 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:14:30.201112  987236 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:14:30.201127  987236 main.go:141] libmachine: Making call to close driver server
	I0116 02:14:30.201135  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .Close
	I0116 02:14:30.201194  987236 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:14:30.201231  987236 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:14:30.201185  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | Closing plugin on server side
	I0116 02:14:30.201246  987236 main.go:141] libmachine: Making call to close driver server
	I0116 02:14:30.201254  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .Close
	I0116 02:14:30.201480  987236 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:14:30.201513  987236 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:14:30.201947  987236 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:14:30.201966  987236 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:14:30.202023  987236 kapi.go:59] client config for ingress-addon-legacy-473102: &rest.Config{Host:"https://192.168.39.44:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.key", CAFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]u
int8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:14:30.202373  987236 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-473102" to be "Ready" ...
	I0116 02:14:30.215047  987236 node_ready.go:49] node "ingress-addon-legacy-473102" has status "Ready":"True"
	I0116 02:14:30.215080  987236 node_ready.go:38] duration metric: took 12.682358ms waiting for node "ingress-addon-legacy-473102" to be "Ready" ...
	I0116 02:14:30.215093  987236 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
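node_ready.go and pod_ready.go above poll the API server until the node and each system-critical pod report the Ready condition. The sketch below is a simplified stand-in for that wait loop, not minikube's actual implementation; the helper name waitPodReady and the 2s poll interval are illustrative.

    package readiness

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the pod until its Ready condition is True or the
    // timeout elapses, roughly what the pod_ready.go log lines above record.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return true
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return false
    }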
	I0116 02:14:30.225638  987236 main.go:141] libmachine: Making call to close driver server
	I0116 02:14:30.225666  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) Calling .Close
	I0116 02:14:30.226077  987236 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:14:30.226141  987236 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:14:30.226106  987236 main.go:141] libmachine: (ingress-addon-legacy-473102) DBG | Closing plugin on server side
	I0116 02:14:30.227963  987236 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0116 02:14:30.228895  987236 addons.go:505] enable addons completed in 872.849934ms: enabled=[storage-provisioner default-storageclass]
	I0116 02:14:30.229975  987236 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-tvdh2" in "kube-system" namespace to be "Ready" ...
	I0116 02:14:32.240029  987236 pod_ready.go:102] pod "coredns-66bff467f8-tvdh2" in "kube-system" namespace has status "Ready":"False"
	I0116 02:14:34.738473  987236 pod_ready.go:92] pod "coredns-66bff467f8-tvdh2" in "kube-system" namespace has status "Ready":"True"
	I0116 02:14:34.738501  987236 pod_ready.go:81] duration metric: took 4.508506371s waiting for pod "coredns-66bff467f8-tvdh2" in "kube-system" namespace to be "Ready" ...
	I0116 02:14:34.738511  987236 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-473102" in "kube-system" namespace to be "Ready" ...
	I0116 02:14:34.742968  987236 pod_ready.go:92] pod "etcd-ingress-addon-legacy-473102" in "kube-system" namespace has status "Ready":"True"
	I0116 02:14:34.742990  987236 pod_ready.go:81] duration metric: took 4.472616ms waiting for pod "etcd-ingress-addon-legacy-473102" in "kube-system" namespace to be "Ready" ...
	I0116 02:14:34.743000  987236 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-473102" in "kube-system" namespace to be "Ready" ...
	I0116 02:14:34.747203  987236 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-473102" in "kube-system" namespace has status "Ready":"True"
	I0116 02:14:34.747234  987236 pod_ready.go:81] duration metric: took 4.226767ms waiting for pod "kube-apiserver-ingress-addon-legacy-473102" in "kube-system" namespace to be "Ready" ...
	I0116 02:14:34.747243  987236 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-473102" in "kube-system" namespace to be "Ready" ...
	I0116 02:14:34.754429  987236 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-473102" in "kube-system" namespace has status "Ready":"True"
	I0116 02:14:34.754452  987236 pod_ready.go:81] duration metric: took 7.202607ms waiting for pod "kube-controller-manager-ingress-addon-legacy-473102" in "kube-system" namespace to be "Ready" ...
	I0116 02:14:34.754462  987236 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-27x9r" in "kube-system" namespace to be "Ready" ...
	I0116 02:14:34.759695  987236 pod_ready.go:92] pod "kube-proxy-27x9r" in "kube-system" namespace has status "Ready":"True"
	I0116 02:14:34.759716  987236 pod_ready.go:81] duration metric: took 5.247729ms waiting for pod "kube-proxy-27x9r" in "kube-system" namespace to be "Ready" ...
	I0116 02:14:34.759728  987236 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-473102" in "kube-system" namespace to be "Ready" ...
	I0116 02:14:34.931136  987236 request.go:629] Waited for 171.323105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.44:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-473102
	I0116 02:14:35.131783  987236 request.go:629] Waited for 197.384209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.44:8443/api/v1/nodes/ingress-addon-legacy-473102
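The request.go messages above come from client-go's client-side rate limiter: with QPS and Burst left at zero in rest.Config (as in the config dumps earlier), client-go applies its defaults of 5 requests/s with a burst of 10, so rapid sequences of GETs are queued for a few hundred milliseconds. A sketch of relaxing those limits follows; the values and the helper name relaxThrottle are illustrative, not what minikube sets.

    package throttling

    import "k8s.io/client-go/rest"

    // relaxThrottle raises client-go's client-side rate limits so bursts of
    // status polls are not queued. Illustrative values only.
    func relaxThrottle(cfg *rest.Config) {
    	cfg.QPS = 50
    	cfg.Burst = 100
    }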
	I0116 02:14:35.135419  987236 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-473102" in "kube-system" namespace has status "Ready":"True"
	I0116 02:14:35.135446  987236 pod_ready.go:81] duration metric: took 375.710254ms waiting for pod "kube-scheduler-ingress-addon-legacy-473102" in "kube-system" namespace to be "Ready" ...
	I0116 02:14:35.135460  987236 pod_ready.go:38] duration metric: took 4.920350841s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:14:35.135483  987236 api_server.go:52] waiting for apiserver process to appear ...
	I0116 02:14:35.135564  987236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:14:35.149160  987236 api_server.go:72] duration metric: took 5.228916807s to wait for apiserver process to appear ...
	I0116 02:14:35.149252  987236 api_server.go:88] waiting for apiserver healthz status ...
	I0116 02:14:35.149277  987236 api_server.go:253] Checking apiserver healthz at https://192.168.39.44:8443/healthz ...
	I0116 02:14:35.156187  987236 api_server.go:279] https://192.168.39.44:8443/healthz returned 200:
	ok
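The healthz probe logged above is a plain HTTPS GET against https://192.168.39.44:8443/healthz that expects a 200 response with body "ok". A stdlib-only sketch of the same check is below, assuming the profile's ca.crt is used to verify the API server's certificate; the function name and arguments are illustrative.

    package healthz

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    // checkHealthz performs the probe recorded in the log: GET /healthz over
    // TLS verified against the given CA, treating HTTP 200 as healthy.
    func checkHealthz(endpoint, caFile string) error {
    	caPEM, err := os.ReadFile(caFile)
    	if err != nil {
    		return err
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{RootCAs: pool},
    	}}
    	resp, err := client.Get(endpoint + "/healthz")
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	return nil
    }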
	I0116 02:14:35.157193  987236 api_server.go:141] control plane version: v1.18.20
	I0116 02:14:35.157217  987236 api_server.go:131] duration metric: took 7.959045ms to wait for apiserver health ...
	I0116 02:14:35.157227  987236 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 02:14:35.331574  987236 request.go:629] Waited for 174.261302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.44:8443/api/v1/namespaces/kube-system/pods
	I0116 02:14:35.337164  987236 system_pods.go:59] 7 kube-system pods found
	I0116 02:14:35.337194  987236 system_pods.go:61] "coredns-66bff467f8-tvdh2" [e86570dc-e510-4c31-9ced-84efde5bb99e] Running
	I0116 02:14:35.337203  987236 system_pods.go:61] "etcd-ingress-addon-legacy-473102" [e5e83b52-460a-4c1d-bfca-9cdd168774be] Running
	I0116 02:14:35.337208  987236 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-473102" [aaabe8e4-52c0-4ed6-a0b4-1f3641055a5d] Running
	I0116 02:14:35.337212  987236 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-473102" [ca7bf28c-2e7d-44b7-88e8-0c5610531ca0] Running
	I0116 02:14:35.337215  987236 system_pods.go:61] "kube-proxy-27x9r" [737bd5cd-321a-4a5d-b1b8-5f987660e261] Running
	I0116 02:14:35.337219  987236 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-473102" [24606b32-533b-4a65-8184-090d819f4227] Running
	I0116 02:14:35.337223  987236 system_pods.go:61] "storage-provisioner" [3b6fc219-9722-484b-a781-81e992240a24] Running
	I0116 02:14:35.337228  987236 system_pods.go:74] duration metric: took 179.996219ms to wait for pod list to return data ...
	I0116 02:14:35.337235  987236 default_sa.go:34] waiting for default service account to be created ...
	I0116 02:14:35.531692  987236 request.go:629] Waited for 194.330642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.44:8443/api/v1/namespaces/default/serviceaccounts
	I0116 02:14:35.534792  987236 default_sa.go:45] found service account: "default"
	I0116 02:14:35.534823  987236 default_sa.go:55] duration metric: took 197.58107ms for default service account to be created ...
	I0116 02:14:35.534831  987236 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 02:14:35.731354  987236 request.go:629] Waited for 196.425068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.44:8443/api/v1/namespaces/kube-system/pods
	I0116 02:14:35.737492  987236 system_pods.go:86] 7 kube-system pods found
	I0116 02:14:35.737522  987236 system_pods.go:89] "coredns-66bff467f8-tvdh2" [e86570dc-e510-4c31-9ced-84efde5bb99e] Running
	I0116 02:14:35.737528  987236 system_pods.go:89] "etcd-ingress-addon-legacy-473102" [e5e83b52-460a-4c1d-bfca-9cdd168774be] Running
	I0116 02:14:35.737532  987236 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-473102" [aaabe8e4-52c0-4ed6-a0b4-1f3641055a5d] Running
	I0116 02:14:35.737537  987236 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-473102" [ca7bf28c-2e7d-44b7-88e8-0c5610531ca0] Running
	I0116 02:14:35.737540  987236 system_pods.go:89] "kube-proxy-27x9r" [737bd5cd-321a-4a5d-b1b8-5f987660e261] Running
	I0116 02:14:35.737544  987236 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-473102" [24606b32-533b-4a65-8184-090d819f4227] Running
	I0116 02:14:35.737550  987236 system_pods.go:89] "storage-provisioner" [3b6fc219-9722-484b-a781-81e992240a24] Running
	I0116 02:14:35.737557  987236 system_pods.go:126] duration metric: took 202.720537ms to wait for k8s-apps to be running ...
	I0116 02:14:35.737566  987236 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:14:35.737615  987236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:14:35.752910  987236 system_svc.go:56] duration metric: took 15.329361ms WaitForService to wait for kubelet.
	I0116 02:14:35.752963  987236 kubeadm.go:581] duration metric: took 5.832719722s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:14:35.752994  987236 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:14:35.931536  987236 request.go:629] Waited for 178.43332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.44:8443/api/v1/nodes
	I0116 02:14:35.934893  987236 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:14:35.934925  987236 node_conditions.go:123] node cpu capacity is 2
	I0116 02:14:35.934936  987236 node_conditions.go:105] duration metric: took 181.935828ms to run NodePressure ...
	I0116 02:14:35.934948  987236 start.go:228] waiting for startup goroutines ...
	I0116 02:14:35.934954  987236 start.go:233] waiting for cluster config update ...
	I0116 02:14:35.934974  987236 start.go:242] writing updated cluster config ...
	I0116 02:14:35.935237  987236 ssh_runner.go:195] Run: rm -f paused
	I0116 02:14:35.985362  987236 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0116 02:14:35.987403  987236 out.go:177] 
	W0116 02:14:35.988859  987236 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0116 02:14:35.990381  987236 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0116 02:14:35.991753  987236 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-473102" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 02:13:36 UTC, ends at Tue 2024-01-16 02:17:41 UTC. --
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.714419360Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705371461714392255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=0a092d2d-d799-48ff-8fb5-48c660f7315d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.716306053Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=56f9cb46-fddc-4d07-a69f-7ba6141dcca9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.716392646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=56f9cb46-fddc-4d07-a69f-7ba6141dcca9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.716687447Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eb2c9c25ba0ad861b15ef23317f9519315964cf505027c2d5eda611b7fb44bd4,PodSandboxId:4785e129fa4c94170bfb9adecef507e27d0ea79fcc6295f65f5d4b9560369534,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705371449806774774,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-bzv6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f4ce4150-d5aa-4b63-88e3-61afdf92b4a7,},Annotations:map[string]string{io.kubernetes.container.hash: 20fa498a,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e48d483028af4523fff35f0ffde8e3ca61abe9a5bfcdb958648acb2a9aff8a,PodSandboxId:b966bf8f47d09469cef897951c80498c762439a6ce31a5f731fd8e20992a5b52,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705371309796717934,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e16f5d1e-a4fe-4dbc-807c-3f975fc47d17,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7b585abe,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978a166384cd88da85d6949b29111b1ba492f67b55ef6c106dd277667edbcc8a,PodSandboxId:6e684e8949b82b36cd24d23d6a9ecea44b3abc1a3894a14b34d2d015967f06e4,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1705371288429671218,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-dj8gn,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 462b33d2-52e1-4275-93bd-1d1260e76b8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff615a71,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:214ef7677e6b6d2557bc67fce76a26248d8eb98c16d941e82f9c275fb1ae20a7,PodSandboxId:e04e18bd440dbf5e35ac8b3ad1b34c1b27d1b182fb36088edfa50fdc816416dc,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705371279269637750,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hl5st,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5087cc94-cad1-49b9-ae78-ea13a0d9738b,},Annotations:map[string]string{io.kubernetes.container.hash: 712796a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9f82a47fa83295acac79395e15df6831c75232a6c37c6244b7a1f6bf98a415,PodSandboxId:1d8a5c508f61c4b2ebc6f459dca947beada5070e3df824adbd07e44ea75dc9d3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705371279120153630,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8vd7t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 68342bac-b036-45f5-9179-d581fd7a064f,},Annotations:map[string]string{io.kubernetes.container.hash: 25203402,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb543371d90966dc03f548313f11a80f69be6e2f8e4ea51a88cd2041b6d60356,PodSandboxId:9640b04441e113250b92db11f3d554338f56a15514494d1da0484a04a66cf56f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1705371272554169144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-tvdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e86570dc-e510-4c31-9ced-84efde5bb99e,},Annotations:map[string]string{io.kubernetes.container.hash: 75abd921,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791a26fdca6ec1ca78d50444f3e
dad16c131621ed8d2016eb728217fabfe4935,PodSandboxId:54a93bdf03c62e6bb0c668e924b3f15bded3c3449f4da687774c9c2f89e0b27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705371272428847883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b6fc219-9722-484b-a781-81e992240a24,},Annotations:map[string]string{io.kubernetes.container.hash: f3aae716,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4201636fd059b5977e28d4ce55a7
47cf8a8a3418a0c2928d754af8705e89b822,PodSandboxId:ccc0e749ecdf9cc0b0bc4dcbd600c2d46befc58337fd0b92287cf309dc598052,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1705371270687787220,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-27x9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 737bd5cd-321a-4a5d-b1b8-5f987660e261,},Annotations:map[string]string{io.kubernetes.container.hash: cc8d9790,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc168017cabda8ab4d95827960bd08ae1f1ee20d045bc7d4da9aed15c22224f,Pod
SandboxId:a1c769d3b90f924488a809138f247b613b97eb387f5ebfb3af055dc419d5ebf3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1705371245538159457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-473102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83708af28a8ac1b780546fe7ced5816f,},Annotations:map[string]string{io.kubernetes.container.hash: 53a4be4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01485164bf74ce3289c4e8f2f66a1b3a2fa2fe3aef84bd606df5e5bd42d7b125,PodSandboxId:4abde0dd645c97595411764e9b34c7cf0c762
a037c9ce268c69c81da3773aed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1705371243746310099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-473102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc6058fad7a63b72cb59e71e21dadb7a54f22a4ed72b0e95433f9a999bf7ceb0,PodSandboxId:6e4eae23d43f8e6fd3c89ce3a6625a5a7d870d419bc
eccbd5799856fd5ff6f7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1705371243728754570,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-473102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89d5ad6015c4dbbdec2934c5e580f100,},Annotations:map[string]string{io.kubernetes.container.hash: 110ba269,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a993e57629c5a4205deff31368a838950fd199bd5f8e0f820afab3c6b4d899,PodSandboxId:c89039cef671af57aef81afa652d12089e3e373a99dab4897
bb4861af5b597a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1705371243625617677,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-473102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=56f9cb46-fddc-4d07-a69f-7ba6141dcca9 name=/runtime.v1.RuntimeServ
ice/ListContainers
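The journal entries above show the kubelet issuing unfiltered ListContainers calls over the CRI gRPC API, which CRI-O answers with the full container list. The sketch below makes the same call directly with the k8s.io/cri-api client; it is a hedged illustration only, and the socket path is CRI-O's usual default rather than something confirmed by this log.

    package crio

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // listContainers issues the same unfiltered ListContainers request that
    // appears in the journal, straight against CRI-O's CRI socket.
    func listContainers() error {
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		return err
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		return err
    	}
    	for _, c := range resp.Containers {
    		fmt.Println(c.Metadata.Name, c.State)
    	}
    	return nil
    }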
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.764792336Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f71b2a99-0bde-430d-8044-4423a0123635 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.764887343Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f71b2a99-0bde-430d-8044-4423a0123635 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.767141315Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2649afb0-851c-4a15-91b0-2de023c53d0d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.767607317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705371461767594929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=2649afb0-851c-4a15-91b0-2de023c53d0d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.768621083Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8e1488b1-4596-42b9-9b81-96899bd15c0e name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.768703544Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8e1488b1-4596-42b9-9b81-96899bd15c0e name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.769100602Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eb2c9c25ba0ad861b15ef23317f9519315964cf505027c2d5eda611b7fb44bd4,PodSandboxId:4785e129fa4c94170bfb9adecef507e27d0ea79fcc6295f65f5d4b9560369534,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705371449806774774,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-bzv6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f4ce4150-d5aa-4b63-88e3-61afdf92b4a7,},Annotations:map[string]string{io.kubernetes.container.hash: 20fa498a,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e48d483028af4523fff35f0ffde8e3ca61abe9a5bfcdb958648acb2a9aff8a,PodSandboxId:b966bf8f47d09469cef897951c80498c762439a6ce31a5f731fd8e20992a5b52,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705371309796717934,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e16f5d1e-a4fe-4dbc-807c-3f975fc47d17,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7b585abe,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978a166384cd88da85d6949b29111b1ba492f67b55ef6c106dd277667edbcc8a,PodSandboxId:6e684e8949b82b36cd24d23d6a9ecea44b3abc1a3894a14b34d2d015967f06e4,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1705371288429671218,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-dj8gn,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 462b33d2-52e1-4275-93bd-1d1260e76b8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff615a71,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:214ef7677e6b6d2557bc67fce76a26248d8eb98c16d941e82f9c275fb1ae20a7,PodSandboxId:e04e18bd440dbf5e35ac8b3ad1b34c1b27d1b182fb36088edfa50fdc816416dc,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705371279269637750,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hl5st,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5087cc94-cad1-49b9-ae78-ea13a0d9738b,},Annotations:map[string]string{io.kubernetes.container.hash: 712796a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9f82a47fa83295acac79395e15df6831c75232a6c37c6244b7a1f6bf98a415,PodSandboxId:1d8a5c508f61c4b2ebc6f459dca947beada5070e3df824adbd07e44ea75dc9d3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705371279120153630,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8vd7t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 68342bac-b036-45f5-9179-d581fd7a064f,},Annotations:map[string]string{io.kubernetes.container.hash: 25203402,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb543371d90966dc03f548313f11a80f69be6e2f8e4ea51a88cd2041b6d60356,PodSandboxId:9640b04441e113250b92db11f3d554338f56a15514494d1da0484a04a66cf56f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1705371272554169144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-tvdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e86570dc-e510-4c31-9ced-84efde5bb99e,},Annotations:map[string]string{io.kubernetes.container.hash: 75abd921,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791a26fdca6ec1ca78d50444f3e
dad16c131621ed8d2016eb728217fabfe4935,PodSandboxId:54a93bdf03c62e6bb0c668e924b3f15bded3c3449f4da687774c9c2f89e0b27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705371272428847883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b6fc219-9722-484b-a781-81e992240a24,},Annotations:map[string]string{io.kubernetes.container.hash: f3aae716,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4201636fd059b5977e28d4ce55a7
47cf8a8a3418a0c2928d754af8705e89b822,PodSandboxId:ccc0e749ecdf9cc0b0bc4dcbd600c2d46befc58337fd0b92287cf309dc598052,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1705371270687787220,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-27x9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 737bd5cd-321a-4a5d-b1b8-5f987660e261,},Annotations:map[string]string{io.kubernetes.container.hash: cc8d9790,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc168017cabda8ab4d95827960bd08ae1f1ee20d045bc7d4da9aed15c22224f,Pod
SandboxId:a1c769d3b90f924488a809138f247b613b97eb387f5ebfb3af055dc419d5ebf3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1705371245538159457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-473102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83708af28a8ac1b780546fe7ced5816f,},Annotations:map[string]string{io.kubernetes.container.hash: 53a4be4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01485164bf74ce3289c4e8f2f66a1b3a2fa2fe3aef84bd606df5e5bd42d7b125,PodSandboxId:4abde0dd645c97595411764e9b34c7cf0c762
a037c9ce268c69c81da3773aed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1705371243746310099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-473102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc6058fad7a63b72cb59e71e21dadb7a54f22a4ed72b0e95433f9a999bf7ceb0,PodSandboxId:6e4eae23d43f8e6fd3c89ce3a6625a5a7d870d419bc
eccbd5799856fd5ff6f7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1705371243728754570,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-473102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89d5ad6015c4dbbdec2934c5e580f100,},Annotations:map[string]string{io.kubernetes.container.hash: 110ba269,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a993e57629c5a4205deff31368a838950fd199bd5f8e0f820afab3c6b4d899,PodSandboxId:c89039cef671af57aef81afa652d12089e3e373a99dab4897
bb4861af5b597a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1705371243625617677,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-473102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8e1488b1-4596-42b9-9b81-96899bd15c0e name=/runtime.v1.RuntimeServ
ice/ListContainers
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.811824122Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9c31181f-8d7b-4d6c-bee1-d951417267b9 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.811911965Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9c31181f-8d7b-4d6c-bee1-d951417267b9 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.813458966Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=61a2bd32-c682-4ff6-a3a7-050de469a12d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.814201847Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705371461814181423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=61a2bd32-c682-4ff6-a3a7-050de469a12d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.815871868Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=17a287c7-a2bd-4b00-be9d-0117d681e545 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.815933026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=17a287c7-a2bd-4b00-be9d-0117d681e545 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.816248763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eb2c9c25ba0ad861b15ef23317f9519315964cf505027c2d5eda611b7fb44bd4,PodSandboxId:4785e129fa4c94170bfb9adecef507e27d0ea79fcc6295f65f5d4b9560369534,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705371449806774774,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-bzv6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f4ce4150-d5aa-4b63-88e3-61afdf92b4a7,},Annotations:map[string]string{io.kubernetes.container.hash: 20fa498a,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e48d483028af4523fff35f0ffde8e3ca61abe9a5bfcdb958648acb2a9aff8a,PodSandboxId:b966bf8f47d09469cef897951c80498c762439a6ce31a5f731fd8e20992a5b52,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705371309796717934,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e16f5d1e-a4fe-4dbc-807c-3f975fc47d17,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7b585abe,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978a166384cd88da85d6949b29111b1ba492f67b55ef6c106dd277667edbcc8a,PodSandboxId:6e684e8949b82b36cd24d23d6a9ecea44b3abc1a3894a14b34d2d015967f06e4,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1705371288429671218,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-dj8gn,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 462b33d2-52e1-4275-93bd-1d1260e76b8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff615a71,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:214ef7677e6b6d2557bc67fce76a26248d8eb98c16d941e82f9c275fb1ae20a7,PodSandboxId:e04e18bd440dbf5e35ac8b3ad1b34c1b27d1b182fb36088edfa50fdc816416dc,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705371279269637750,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hl5st,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5087cc94-cad1-49b9-ae78-ea13a0d9738b,},Annotations:map[string]string{io.kubernetes.container.hash: 712796a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9f82a47fa83295acac79395e15df6831c75232a6c37c6244b7a1f6bf98a415,PodSandboxId:1d8a5c508f61c4b2ebc6f459dca947beada5070e3df824adbd07e44ea75dc9d3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705371279120153630,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8vd7t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 68342bac-b036-45f5-9179-d581fd7a064f,},Annotations:map[string]string{io.kubernetes.container.hash: 25203402,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb543371d90966dc03f548313f11a80f69be6e2f8e4ea51a88cd2041b6d60356,PodSandboxId:9640b04441e113250b92db11f3d554338f56a15514494d1da0484a04a66cf56f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1705371272554169144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-tvdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e86570dc-e510-4c31-9ced-84efde5bb99e,},Annotations:map[string]string{io.kubernetes.container.hash: 75abd921,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791a26fdca6ec1ca78d50444f3e
dad16c131621ed8d2016eb728217fabfe4935,PodSandboxId:54a93bdf03c62e6bb0c668e924b3f15bded3c3449f4da687774c9c2f89e0b27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705371272428847883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b6fc219-9722-484b-a781-81e992240a24,},Annotations:map[string]string{io.kubernetes.container.hash: f3aae716,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4201636fd059b5977e28d4ce55a7
47cf8a8a3418a0c2928d754af8705e89b822,PodSandboxId:ccc0e749ecdf9cc0b0bc4dcbd600c2d46befc58337fd0b92287cf309dc598052,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1705371270687787220,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-27x9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 737bd5cd-321a-4a5d-b1b8-5f987660e261,},Annotations:map[string]string{io.kubernetes.container.hash: cc8d9790,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc168017cabda8ab4d95827960bd08ae1f1ee20d045bc7d4da9aed15c22224f,Pod
SandboxId:a1c769d3b90f924488a809138f247b613b97eb387f5ebfb3af055dc419d5ebf3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1705371245538159457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-473102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83708af28a8ac1b780546fe7ced5816f,},Annotations:map[string]string{io.kubernetes.container.hash: 53a4be4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01485164bf74ce3289c4e8f2f66a1b3a2fa2fe3aef84bd606df5e5bd42d7b125,PodSandboxId:4abde0dd645c97595411764e9b34c7cf0c762
a037c9ce268c69c81da3773aed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1705371243746310099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-473102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc6058fad7a63b72cb59e71e21dadb7a54f22a4ed72b0e95433f9a999bf7ceb0,PodSandboxId:6e4eae23d43f8e6fd3c89ce3a6625a5a7d870d419bc
eccbd5799856fd5ff6f7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1705371243728754570,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-473102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89d5ad6015c4dbbdec2934c5e580f100,},Annotations:map[string]string{io.kubernetes.container.hash: 110ba269,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a993e57629c5a4205deff31368a838950fd199bd5f8e0f820afab3c6b4d899,PodSandboxId:c89039cef671af57aef81afa652d12089e3e373a99dab4897
bb4861af5b597a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1705371243625617677,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-473102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=17a287c7-a2bd-4b00-be9d-0117d681e545 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.851475922Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3a573ddd-47e4-468c-a4b1-115f40e562cc name=/runtime.v1.RuntimeService/Version
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.851569083Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3a573ddd-47e4-468c-a4b1-115f40e562cc name=/runtime.v1.RuntimeService/Version
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.853943878Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7eda32dd-78ae-4db4-9e7c-12777eb3ccc2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.854521645Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705371461854507563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=7eda32dd-78ae-4db4-9e7c-12777eb3ccc2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.855633707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=15e0fbab-2d21-4fe0-9f2b-79657e8d166f name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.855712157Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=15e0fbab-2d21-4fe0-9f2b-79657e8d166f name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:17:41 ingress-addon-legacy-473102 crio[715]: time="2024-01-16 02:17:41.855967682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eb2c9c25ba0ad861b15ef23317f9519315964cf505027c2d5eda611b7fb44bd4,PodSandboxId:4785e129fa4c94170bfb9adecef507e27d0ea79fcc6295f65f5d4b9560369534,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705371449806774774,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-bzv6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f4ce4150-d5aa-4b63-88e3-61afdf92b4a7,},Annotations:map[string]string{io.kubernetes.container.hash: 20fa498a,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e48d483028af4523fff35f0ffde8e3ca61abe9a5bfcdb958648acb2a9aff8a,PodSandboxId:b966bf8f47d09469cef897951c80498c762439a6ce31a5f731fd8e20992a5b52,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705371309796717934,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e16f5d1e-a4fe-4dbc-807c-3f975fc47d17,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7b585abe,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978a166384cd88da85d6949b29111b1ba492f67b55ef6c106dd277667edbcc8a,PodSandboxId:6e684e8949b82b36cd24d23d6a9ecea44b3abc1a3894a14b34d2d015967f06e4,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1705371288429671218,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-dj8gn,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 462b33d2-52e1-4275-93bd-1d1260e76b8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff615a71,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:214ef7677e6b6d2557bc67fce76a26248d8eb98c16d941e82f9c275fb1ae20a7,PodSandboxId:e04e18bd440dbf5e35ac8b3ad1b34c1b27d1b182fb36088edfa50fdc816416dc,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705371279269637750,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hl5st,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5087cc94-cad1-49b9-ae78-ea13a0d9738b,},Annotations:map[string]string{io.kubernetes.container.hash: 712796a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9f82a47fa83295acac79395e15df6831c75232a6c37c6244b7a1f6bf98a415,PodSandboxId:1d8a5c508f61c4b2ebc6f459dca947beada5070e3df824adbd07e44ea75dc9d3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705371279120153630,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8vd7t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 68342bac-b036-45f5-9179-d581fd7a064f,},Annotations:map[string]string{io.kubernetes.container.hash: 25203402,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb543371d90966dc03f548313f11a80f69be6e2f8e4ea51a88cd2041b6d60356,PodSandboxId:9640b04441e113250b92db11f3d554338f56a15514494d1da0484a04a66cf56f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1705371272554169144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-tvdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e86570dc-e510-4c31-9ced-84efde5bb99e,},Annotations:map[string]string{io.kubernetes.container.hash: 75abd921,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791a26fdca6ec1ca78d50444f3e
dad16c131621ed8d2016eb728217fabfe4935,PodSandboxId:54a93bdf03c62e6bb0c668e924b3f15bded3c3449f4da687774c9c2f89e0b27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705371272428847883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b6fc219-9722-484b-a781-81e992240a24,},Annotations:map[string]string{io.kubernetes.container.hash: f3aae716,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4201636fd059b5977e28d4ce55a7
47cf8a8a3418a0c2928d754af8705e89b822,PodSandboxId:ccc0e749ecdf9cc0b0bc4dcbd600c2d46befc58337fd0b92287cf309dc598052,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1705371270687787220,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-27x9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 737bd5cd-321a-4a5d-b1b8-5f987660e261,},Annotations:map[string]string{io.kubernetes.container.hash: cc8d9790,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc168017cabda8ab4d95827960bd08ae1f1ee20d045bc7d4da9aed15c22224f,Pod
SandboxId:a1c769d3b90f924488a809138f247b613b97eb387f5ebfb3af055dc419d5ebf3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1705371245538159457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-473102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83708af28a8ac1b780546fe7ced5816f,},Annotations:map[string]string{io.kubernetes.container.hash: 53a4be4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01485164bf74ce3289c4e8f2f66a1b3a2fa2fe3aef84bd606df5e5bd42d7b125,PodSandboxId:4abde0dd645c97595411764e9b34c7cf0c762
a037c9ce268c69c81da3773aed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1705371243746310099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-473102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc6058fad7a63b72cb59e71e21dadb7a54f22a4ed72b0e95433f9a999bf7ceb0,PodSandboxId:6e4eae23d43f8e6fd3c89ce3a6625a5a7d870d419bc
eccbd5799856fd5ff6f7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1705371243728754570,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-473102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89d5ad6015c4dbbdec2934c5e580f100,},Annotations:map[string]string{io.kubernetes.container.hash: 110ba269,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a993e57629c5a4205deff31368a838950fd199bd5f8e0f820afab3c6b4d899,PodSandboxId:c89039cef671af57aef81afa652d12089e3e373a99dab4897
bb4861af5b597a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1705371243625617677,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-473102,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=15e0fbab-2d21-4fe0-9f2b-79657e8d166f name=/runtime.v1.RuntimeServ
ice/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	eb2c9c25ba0ad       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            12 seconds ago      Running             hello-world-app           0                   4785e129fa4c9       hello-world-app-5f5d8b66bb-bzv6p
	15e48d483028a       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                    2 minutes ago       Running             nginx                     0                   b966bf8f47d09       nginx
	978a166384cd8       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   6e684e8949b82       ingress-nginx-controller-7fcf777cb7-dj8gn
	214ef7677e6b6       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   e04e18bd440db       ingress-nginx-admission-patch-hl5st
	da9f82a47fa83       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   1d8a5c508f61c       ingress-nginx-admission-create-8vd7t
	cb543371d9096       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   9640b04441e11       coredns-66bff467f8-tvdh2
	791a26fdca6ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   54a93bdf03c62       storage-provisioner
	4201636fd059b       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   ccc0e749ecdf9       kube-proxy-27x9r
	3cc168017cabd       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   a1c769d3b90f9       etcd-ingress-addon-legacy-473102
	01485164bf74c       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   4abde0dd645c9       kube-scheduler-ingress-addon-legacy-473102
	dc6058fad7a63       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   6e4eae23d43f8       kube-apiserver-ingress-addon-legacy-473102
	c0a993e57629c       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   c89039cef671a       kube-controller-manager-ingress-addon-legacy-473102
	
	
	==> coredns [cb543371d90966dc03f548313f11a80f69be6e2f8e4ea51a88cd2041b6d60356] <==
	[INFO] 10.244.0.5:49894 - 59607 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000047333s
	[INFO] 10.244.0.5:45462 - 34918 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000092531s
	[INFO] 10.244.0.5:49894 - 6563 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035032s
	[INFO] 10.244.0.5:45462 - 44933 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000090002s
	[INFO] 10.244.0.5:45462 - 2377 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00010638s
	[INFO] 10.244.0.5:49894 - 43839 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000027291s
	[INFO] 10.244.0.5:45462 - 26529 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000092804s
	[INFO] 10.244.0.5:49894 - 52321 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036849s
	[INFO] 10.244.0.5:45462 - 27982 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000115598s
	[INFO] 10.244.0.5:49894 - 56791 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004545s
	[INFO] 10.244.0.5:49894 - 1725 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000036597s
	[INFO] 10.244.0.5:37537 - 20293 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000109517s
	[INFO] 10.244.0.5:35106 - 2399 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00004202s
	[INFO] 10.244.0.5:35106 - 22485 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000041029s
	[INFO] 10.244.0.5:35106 - 6225 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000025291s
	[INFO] 10.244.0.5:35106 - 27585 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000023475s
	[INFO] 10.244.0.5:35106 - 16391 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000023956s
	[INFO] 10.244.0.5:35106 - 46651 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000021968s
	[INFO] 10.244.0.5:35106 - 32783 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000023922s
	[INFO] 10.244.0.5:37537 - 34248 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075551s
	[INFO] 10.244.0.5:37537 - 33646 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000093686s
	[INFO] 10.244.0.5:37537 - 18662 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000082973s
	[INFO] 10.244.0.5:37537 - 45284 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000070167s
	[INFO] 10.244.0.5:37537 - 20214 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038084s
	[INFO] 10.244.0.5:37537 - 39092 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000125327s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-473102
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-473102
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=ingress-addon-legacy-473102
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T02_14_12_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:14:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-473102
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:17:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:15:13 +0000   Tue, 16 Jan 2024 02:14:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:15:13 +0000   Tue, 16 Jan 2024 02:14:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:15:13 +0000   Tue, 16 Jan 2024 02:14:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:15:13 +0000   Tue, 16 Jan 2024 02:14:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    ingress-addon-legacy-473102
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fb68349d71d406ea8d254615c3349e3
	  System UUID:                8fb68349-d71d-406e-a8d2-54615c3349e3
	  Boot ID:                    a58596e9-d436-4ce8-af35-a017470a1e0c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-bzv6p                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 coredns-66bff467f8-tvdh2                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m13s
	  kube-system                 etcd-ingress-addon-legacy-473102                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kube-apiserver-ingress-addon-legacy-473102             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-473102    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kube-proxy-27x9r                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 kube-scheduler-ingress-addon-legacy-473102             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 3m40s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m40s (x4 over 3m40s)  kubelet     Node ingress-addon-legacy-473102 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m40s (x3 over 3m40s)  kubelet     Node ingress-addon-legacy-473102 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m40s (x3 over 3m40s)  kubelet     Node ingress-addon-legacy-473102 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m40s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 3m29s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m29s                  kubelet     Node ingress-addon-legacy-473102 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m29s                  kubelet     Node ingress-addon-legacy-473102 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m29s                  kubelet     Node ingress-addon-legacy-473102 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m29s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m19s                  kubelet     Node ingress-addon-legacy-473102 status is now: NodeReady
	  Normal  Starting                 3m12s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan16 02:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.094525] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.478881] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.392005] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150002] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.041479] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.127958] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.101485] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.148334] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.107678] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.203822] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[  +8.817840] systemd-fstab-generator[1023]: Ignoring "noauto" for root device
	[Jan16 02:14] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +10.391525] systemd-fstab-generator[1421]: Ignoring "noauto" for root device
	[ +18.291408] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.993663] kauditd_printk_skb: 13 callbacks suppressed
	[Jan16 02:15] kauditd_printk_skb: 21 callbacks suppressed
	[Jan16 02:17] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.211912] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [3cc168017cabda8ab4d95827960bd08ae1f1ee20d045bc7d4da9aed15c22224f] <==
	raft2024/01/16 02:14:05 INFO: efcba07991c99763 switched to configuration voters=(17279080839334434659)
	2024-01-16 02:14:05.668867 W | auth: simple token is not cryptographically signed
	2024-01-16 02:14:05.673500 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-16 02:14:05.675620 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-16 02:14:05.675799 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-16 02:14:05.676206 I | etcdserver: efcba07991c99763 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-01-16 02:14:05.676626 I | embed: listening for peers on 192.168.39.44:2380
	raft2024/01/16 02:14:05 INFO: efcba07991c99763 switched to configuration voters=(17279080839334434659)
	2024-01-16 02:14:05.677257 I | etcdserver/membership: added member efcba07991c99763 [https://192.168.39.44:2380] to cluster aad7d4b1c0e48cd8
	raft2024/01/16 02:14:06 INFO: efcba07991c99763 is starting a new election at term 1
	raft2024/01/16 02:14:06 INFO: efcba07991c99763 became candidate at term 2
	raft2024/01/16 02:14:06 INFO: efcba07991c99763 received MsgVoteResp from efcba07991c99763 at term 2
	raft2024/01/16 02:14:06 INFO: efcba07991c99763 became leader at term 2
	raft2024/01/16 02:14:06 INFO: raft.node: efcba07991c99763 elected leader efcba07991c99763 at term 2
	2024-01-16 02:14:06.661666 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-16 02:14:06.662090 I | embed: ready to serve client requests
	2024-01-16 02:14:06.662764 I | etcdserver: published {Name:ingress-addon-legacy-473102 ClientURLs:[https://192.168.39.44:2379]} to cluster aad7d4b1c0e48cd8
	2024-01-16 02:14:06.662892 I | embed: ready to serve client requests
	2024-01-16 02:14:06.663724 I | embed: serving client requests on 192.168.39.44:2379
	2024-01-16 02:14:06.663964 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-16 02:14:06.665697 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-16 02:14:06.665908 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-16 02:14:28.908542 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/persistent-volume-binder\" " with result "range_response_count:1 size:216" took too long (481.15026ms) to execute
	2024-01-16 02:14:28.908912 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (119.515282ms) to execute
	2024-01-16 02:15:15.337782 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:2213" took too long (279.128124ms) to execute
	
	
	==> kernel <==
	 02:17:42 up 4 min,  0 users,  load average: 0.61, 0.48, 0.22
	Linux ingress-addon-legacy-473102 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [dc6058fad7a63b72cb59e71e21dadb7a54f22a4ed72b0e95433f9a999bf7ceb0] <==
	I0116 02:14:09.813407       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0116 02:14:09.813518       1 cache.go:39] Caches are synced for autoregister controller
	I0116 02:14:09.813720       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0116 02:14:09.813769       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0116 02:14:09.825107       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0116 02:14:10.707414       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0116 02:14:10.707466       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0116 02:14:10.722942       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0116 02:14:10.726414       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0116 02:14:10.726528       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0116 02:14:11.237930       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 02:14:11.295209       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0116 02:14:11.415576       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.44]
	I0116 02:14:11.416579       1 controller.go:609] quota admission added evaluator for: endpoints
	I0116 02:14:11.423334       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0116 02:14:12.069671       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0116 02:14:12.767103       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0116 02:14:12.908214       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0116 02:14:13.214229       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0116 02:14:28.999442       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0116 02:14:29.061299       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0116 02:14:36.844906       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0116 02:15:06.817979       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0116 02:17:34.280768       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	E0116 02:17:35.185689       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	
	==> kube-controller-manager [c0a993e57629c5a4205deff31368a838950fd199bd5f8e0f820afab3c6b4d899] <==
	W0116 02:14:29.247491       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-473102. Assuming now as a timestamp.
	I0116 02:14:29.247591       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0116 02:14:29.247965       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0116 02:14:29.247977       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-473102", UID:"0596f161-99e4-4868-b077-9a050a4a449e", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-473102 event: Registered Node ingress-addon-legacy-473102 in Controller
	I0116 02:14:29.267725       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I0116 02:14:29.367500       1 shared_informer.go:230] Caches are synced for disruption 
	I0116 02:14:29.367797       1 disruption.go:339] Sending events to api server.
	I0116 02:14:29.392551       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"611940ef-7d6e-4219-a239-d1b1dfae585c", APIVersion:"apps/v1", ResourceVersion:"360", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0116 02:14:29.453738       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"8d35b51a-3de4-4621-b123-d26ee240b27f", APIVersion:"apps/v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-t76zv
	I0116 02:14:29.465519       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0116 02:14:29.516795       1 shared_informer.go:230] Caches are synced for resource quota 
	I0116 02:14:29.523298       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0116 02:14:29.554566       1 shared_informer.go:230] Caches are synced for resource quota 
	I0116 02:14:29.569241       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0116 02:14:29.618880       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0116 02:14:29.618977       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0116 02:14:36.804555       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a77ed484-7ea9-4746-af6a-458ea68c257f", APIVersion:"apps/v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0116 02:14:36.826703       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"f56cb0cb-1af4-4881-a351-a44ea309c412", APIVersion:"apps/v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-dj8gn
	I0116 02:14:36.896192       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"aeaca76f-2953-4433-80f4-a3a922e02fda", APIVersion:"batch/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-8vd7t
	I0116 02:14:36.930823       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"0dedbc6c-04c3-4d16-af6f-166fb93737b2", APIVersion:"batch/v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-hl5st
	I0116 02:14:39.504779       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"aeaca76f-2953-4433-80f4-a3a922e02fda", APIVersion:"batch/v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0116 02:14:40.473845       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"0dedbc6c-04c3-4d16-af6f-166fb93737b2", APIVersion:"batch/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0116 02:17:26.567455       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"391e33fa-0d51-47a3-b516-a1da67d8f1a1", APIVersion:"apps/v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0116 02:17:26.590271       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"accddfbc-2773-4507-bb97-ee120173a2e9", APIVersion:"apps/v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-bzv6p
	E0116 02:17:38.881589       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-88zg2" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [4201636fd059b5977e28d4ce55a747cf8a8a3418a0c2928d754af8705e89b822] <==
	W0116 02:14:30.872792       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0116 02:14:30.880510       1 node.go:136] Successfully retrieved node IP: 192.168.39.44
	I0116 02:14:30.880598       1 server_others.go:186] Using iptables Proxier.
	I0116 02:14:30.880987       1 server.go:583] Version: v1.18.20
	I0116 02:14:30.888364       1 config.go:315] Starting service config controller
	I0116 02:14:30.888433       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0116 02:14:30.888486       1 config.go:133] Starting endpoints config controller
	I0116 02:14:30.888497       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0116 02:14:30.988798       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0116 02:14:30.988876       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [01485164bf74ce3289c4e8f2f66a1b3a2fa2fe3aef84bd606df5e5bd42d7b125] <==
	I0116 02:14:09.836210       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0116 02:14:09.839528       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0116 02:14:09.839880       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 02:14:09.839920       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 02:14:09.839934       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0116 02:14:09.853513       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 02:14:09.853647       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:14:09.853756       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 02:14:09.853847       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 02:14:09.853882       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 02:14:09.853895       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 02:14:09.853994       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 02:14:09.857951       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 02:14:09.858193       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 02:14:09.858517       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 02:14:09.858703       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 02:14:09.859654       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 02:14:10.811073       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 02:14:10.821653       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 02:14:10.934353       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 02:14:10.954672       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 02:14:10.968840       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 02:14:10.994823       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0116 02:14:13.342921       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0116 02:14:29.172877       1 factory.go:503] pod: kube-system/coredns-66bff467f8-t76zv is already present in unschedulable queue
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 02:13:36 UTC, ends at Tue 2024-01-16 02:17:42 UTC. --
	Jan 16 02:14:41 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:14:41.756622    1428 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5087cc94-cad1-49b9-ae78-ea13a0d9738b-ingress-nginx-admission-token-w9kvd" (OuterVolumeSpecName: "ingress-nginx-admission-token-w9kvd") pod "5087cc94-cad1-49b9-ae78-ea13a0d9738b" (UID: "5087cc94-cad1-49b9-ae78-ea13a0d9738b"). InnerVolumeSpecName "ingress-nginx-admission-token-w9kvd". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 02:14:41 ingress-addon-legacy-473102 kubelet[1428]: W0116 02:14:41.827478    1428 pod_container_deletor.go:77] Container "e04e18bd440dbf5e35ac8b3ad1b34c1b27d1b182fb36088edfa50fdc816416dc" not found in pod's containers
	Jan 16 02:14:41 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:14:41.848141    1428 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-w9kvd" (UniqueName: "kubernetes.io/secret/5087cc94-cad1-49b9-ae78-ea13a0d9738b-ingress-nginx-admission-token-w9kvd") on node "ingress-addon-legacy-473102" DevicePath ""
	Jan 16 02:14:50 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:14:50.154296    1428 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 16 02:14:50 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:14:50.274923    1428 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-7x8kp" (UniqueName: "kubernetes.io/secret/3ee920cb-336b-4c5b-ba33-0db3ec3099fd-minikube-ingress-dns-token-7x8kp") pod "kube-ingress-dns-minikube" (UID: "3ee920cb-336b-4c5b-ba33-0db3ec3099fd")
	Jan 16 02:15:07 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:15:07.013400    1428 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 16 02:15:07 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:15:07.134940    1428 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-t6ksp" (UniqueName: "kubernetes.io/secret/e16f5d1e-a4fe-4dbc-807c-3f975fc47d17-default-token-t6ksp") pod "nginx" (UID: "e16f5d1e-a4fe-4dbc-807c-3f975fc47d17")
	Jan 16 02:17:26 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:17:26.605004    1428 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 16 02:17:26 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:17:26.742754    1428 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-t6ksp" (UniqueName: "kubernetes.io/secret/f4ce4150-d5aa-4b63-88e3-61afdf92b4a7-default-token-t6ksp") pod "hello-world-app-5f5d8b66bb-bzv6p" (UID: "f4ce4150-d5aa-4b63-88e3-61afdf92b4a7")
	Jan 16 02:17:28 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:17:28.109640    1428 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 70d3da7e31e3559e45326c12e70e5bdeec9fec0ea6c256ddd679d4449cb84c71
	Jan 16 02:17:28 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:17:28.251416    1428 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-7x8kp" (UniqueName: "kubernetes.io/secret/3ee920cb-336b-4c5b-ba33-0db3ec3099fd-minikube-ingress-dns-token-7x8kp") pod "3ee920cb-336b-4c5b-ba33-0db3ec3099fd" (UID: "3ee920cb-336b-4c5b-ba33-0db3ec3099fd")
	Jan 16 02:17:28 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:17:28.265591    1428 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ee920cb-336b-4c5b-ba33-0db3ec3099fd-minikube-ingress-dns-token-7x8kp" (OuterVolumeSpecName: "minikube-ingress-dns-token-7x8kp") pod "3ee920cb-336b-4c5b-ba33-0db3ec3099fd" (UID: "3ee920cb-336b-4c5b-ba33-0db3ec3099fd"). InnerVolumeSpecName "minikube-ingress-dns-token-7x8kp". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 02:17:28 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:17:28.351722    1428 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-7x8kp" (UniqueName: "kubernetes.io/secret/3ee920cb-336b-4c5b-ba33-0db3ec3099fd-minikube-ingress-dns-token-7x8kp") on node "ingress-addon-legacy-473102" DevicePath ""
	Jan 16 02:17:28 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:17:28.589284    1428 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 70d3da7e31e3559e45326c12e70e5bdeec9fec0ea6c256ddd679d4449cb84c71
	Jan 16 02:17:28 ingress-addon-legacy-473102 kubelet[1428]: E0116 02:17:28.590126    1428 remote_runtime.go:295] ContainerStatus "70d3da7e31e3559e45326c12e70e5bdeec9fec0ea6c256ddd679d4449cb84c71" from runtime service failed: rpc error: code = NotFound desc = could not find container "70d3da7e31e3559e45326c12e70e5bdeec9fec0ea6c256ddd679d4449cb84c71": container with ID starting with 70d3da7e31e3559e45326c12e70e5bdeec9fec0ea6c256ddd679d4449cb84c71 not found: ID does not exist
	Jan 16 02:17:34 ingress-addon-legacy-473102 kubelet[1428]: E0116 02:17:34.259085    1428 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-dj8gn.17aab24d5cc62555", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-dj8gn", UID:"462b33d2-52e1-4275-93bd-1d1260e76b8e", APIVersion:"v1", ResourceVersion:"459", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-473102"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16197af8f033955, ext:201537029188, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16197af8f033955, ext:201537029188, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-dj8gn.17aab24d5cc62555" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 16 02:17:34 ingress-addon-legacy-473102 kubelet[1428]: E0116 02:17:34.291215    1428 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-dj8gn.17aab24d5cc62555", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-dj8gn", UID:"462b33d2-52e1-4275-93bd-1d1260e76b8e", APIVersion:"v1", ResourceVersion:"459", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-473102"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16197af8f033955, ext:201537029188, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16197af909c56d5, ext:201563840965, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-dj8gn.17aab24d5cc62555" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 16 02:17:37 ingress-addon-legacy-473102 kubelet[1428]: W0116 02:17:37.172402    1428 pod_container_deletor.go:77] Container "6e684e8949b82b36cd24d23d6a9ecea44b3abc1a3894a14b34d2d015967f06e4" not found in pod's containers
	Jan 16 02:17:38 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:17:38.390149    1428 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/462b33d2-52e1-4275-93bd-1d1260e76b8e-webhook-cert") pod "462b33d2-52e1-4275-93bd-1d1260e76b8e" (UID: "462b33d2-52e1-4275-93bd-1d1260e76b8e")
	Jan 16 02:17:38 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:17:38.390209    1428 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-72nk4" (UniqueName: "kubernetes.io/secret/462b33d2-52e1-4275-93bd-1d1260e76b8e-ingress-nginx-token-72nk4") pod "462b33d2-52e1-4275-93bd-1d1260e76b8e" (UID: "462b33d2-52e1-4275-93bd-1d1260e76b8e")
	Jan 16 02:17:38 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:17:38.394762    1428 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462b33d2-52e1-4275-93bd-1d1260e76b8e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "462b33d2-52e1-4275-93bd-1d1260e76b8e" (UID: "462b33d2-52e1-4275-93bd-1d1260e76b8e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 02:17:38 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:17:38.395106    1428 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462b33d2-52e1-4275-93bd-1d1260e76b8e-ingress-nginx-token-72nk4" (OuterVolumeSpecName: "ingress-nginx-token-72nk4") pod "462b33d2-52e1-4275-93bd-1d1260e76b8e" (UID: "462b33d2-52e1-4275-93bd-1d1260e76b8e"). InnerVolumeSpecName "ingress-nginx-token-72nk4". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 02:17:38 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:17:38.490684    1428 reconciler.go:319] Volume detached for volume "ingress-nginx-token-72nk4" (UniqueName: "kubernetes.io/secret/462b33d2-52e1-4275-93bd-1d1260e76b8e-ingress-nginx-token-72nk4") on node "ingress-addon-legacy-473102" DevicePath ""
	Jan 16 02:17:38 ingress-addon-legacy-473102 kubelet[1428]: I0116 02:17:38.490725    1428 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/462b33d2-52e1-4275-93bd-1d1260e76b8e-webhook-cert") on node "ingress-addon-legacy-473102" DevicePath ""
	Jan 16 02:17:39 ingress-addon-legacy-473102 kubelet[1428]: W0116 02:17:39.313918    1428 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/462b33d2-52e1-4275-93bd-1d1260e76b8e/volumes" does not exist
	
	
	==> storage-provisioner [791a26fdca6ec1ca78d50444f3edad16c131621ed8d2016eb728217fabfe4935] <==
	I0116 02:14:32.642366       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 02:14:32.667516       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 02:14:32.668592       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 02:14:32.678507       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 02:14:32.681668       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-473102_6c618444-7b97-4889-9f60-166fad94c6ac!
	I0116 02:14:32.681913       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"60eaa708-a846-4c13-b27d-1d6f5784b89e", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-473102_6c618444-7b97-4889-9f60-166fad94c6ac became leader
	I0116 02:14:32.782352       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-473102_6c618444-7b97-4889-9f60-166fad94c6ac!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-473102 -n ingress-addon-legacy-473102
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-473102 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (172.76s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-835787 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-835787 -- exec busybox-5bc68d56bd-f6p29 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-835787 -- exec busybox-5bc68d56bd-f6p29 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-835787 -- exec busybox-5bc68d56bd-f6p29 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (193.941413ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-f6p29): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-835787 -- exec busybox-5bc68d56bd-hzzdv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-835787 -- exec busybox-5bc68d56bd-hzzdv -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-835787 -- exec busybox-5bc68d56bd-hzzdv -- sh -c "ping -c 1 192.168.39.1": exit status 1 (220.835327ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-hzzdv): exit status 1
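Both pods fail the same way: the BusyBox ping prints "ping: permission denied (are you root?)", which is its EPERM message when it cannot open an ICMP socket. One plausible reading, not confirmed by this log, is that the busybox pod is running without the NET_RAW capability (CRI-O's default capability set, unlike Docker's, may not include it) and the pod's net.ipv4.ping_group_range does not cover its group, so neither raw nor unprivileged ICMP sockets are available. Assuming that hypothesis, a quick check in the same style as the commands above would be:

	out/minikube-linux-amd64 kubectl -p multinode-835787 -- exec busybox-5bc68d56bd-hzzdv -- cat /proc/sys/net/ipv4/ping_group_range
	out/minikube-linux-amd64 -p multinode-835787 ssh "sudo sysctl net.ipv4.ping_group_range"

A range of "1 0" (the kernel default, which allows no group to use unprivileged ICMP) combined with a dropped NET_RAW would be consistent with the exit status 1 seen here.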
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-835787 -n multinode-835787
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-835787 logs -n 25: (1.393193416s)
E0116 02:24:50.170256  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-732370 ssh -- ls                    | mount-start-2-732370 | jenkins | v1.32.0 | 16 Jan 24 02:22 UTC | 16 Jan 24 02:22 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-732370 ssh --                       | mount-start-2-732370 | jenkins | v1.32.0 | 16 Jan 24 02:22 UTC | 16 Jan 24 02:22 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-732370                           | mount-start-2-732370 | jenkins | v1.32.0 | 16 Jan 24 02:22 UTC | 16 Jan 24 02:22 UTC |
	| start   | -p mount-start-2-732370                           | mount-start-2-732370 | jenkins | v1.32.0 | 16 Jan 24 02:22 UTC | 16 Jan 24 02:22 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-732370 | jenkins | v1.32.0 | 16 Jan 24 02:22 UTC |                     |
	|         | --profile mount-start-2-732370                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-732370 ssh -- ls                    | mount-start-2-732370 | jenkins | v1.32.0 | 16 Jan 24 02:22 UTC | 16 Jan 24 02:22 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-732370 ssh --                       | mount-start-2-732370 | jenkins | v1.32.0 | 16 Jan 24 02:22 UTC | 16 Jan 24 02:22 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-732370                           | mount-start-2-732370 | jenkins | v1.32.0 | 16 Jan 24 02:22 UTC | 16 Jan 24 02:22 UTC |
	| delete  | -p mount-start-1-715346                           | mount-start-1-715346 | jenkins | v1.32.0 | 16 Jan 24 02:22 UTC | 16 Jan 24 02:22 UTC |
	| start   | -p multinode-835787                               | multinode-835787     | jenkins | v1.32.0 | 16 Jan 24 02:22 UTC | 16 Jan 24 02:24 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-835787 -- apply -f                   | multinode-835787     | jenkins | v1.32.0 | 16 Jan 24 02:24 UTC | 16 Jan 24 02:24 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-835787 -- rollout                    | multinode-835787     | jenkins | v1.32.0 | 16 Jan 24 02:24 UTC | 16 Jan 24 02:24 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-835787 -- get pods -o                | multinode-835787     | jenkins | v1.32.0 | 16 Jan 24 02:24 UTC | 16 Jan 24 02:24 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-835787 -- get pods -o                | multinode-835787     | jenkins | v1.32.0 | 16 Jan 24 02:24 UTC | 16 Jan 24 02:24 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-835787 -- exec                       | multinode-835787     | jenkins | v1.32.0 | 16 Jan 24 02:24 UTC | 16 Jan 24 02:24 UTC |
	|         | busybox-5bc68d56bd-f6p29 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-835787 -- exec                       | multinode-835787     | jenkins | v1.32.0 | 16 Jan 24 02:24 UTC | 16 Jan 24 02:24 UTC |
	|         | busybox-5bc68d56bd-hzzdv --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-835787 -- exec                       | multinode-835787     | jenkins | v1.32.0 | 16 Jan 24 02:24 UTC | 16 Jan 24 02:24 UTC |
	|         | busybox-5bc68d56bd-f6p29 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-835787 -- exec                       | multinode-835787     | jenkins | v1.32.0 | 16 Jan 24 02:24 UTC | 16 Jan 24 02:24 UTC |
	|         | busybox-5bc68d56bd-hzzdv --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-835787 -- exec                       | multinode-835787     | jenkins | v1.32.0 | 16 Jan 24 02:24 UTC | 16 Jan 24 02:24 UTC |
	|         | busybox-5bc68d56bd-f6p29 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-835787 -- exec                       | multinode-835787     | jenkins | v1.32.0 | 16 Jan 24 02:24 UTC | 16 Jan 24 02:24 UTC |
	|         | busybox-5bc68d56bd-hzzdv -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-835787 -- get pods -o                | multinode-835787     | jenkins | v1.32.0 | 16 Jan 24 02:24 UTC | 16 Jan 24 02:24 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-835787 -- exec                       | multinode-835787     | jenkins | v1.32.0 | 16 Jan 24 02:24 UTC | 16 Jan 24 02:24 UTC |
	|         | busybox-5bc68d56bd-f6p29                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-835787 -- exec                       | multinode-835787     | jenkins | v1.32.0 | 16 Jan 24 02:24 UTC |                     |
	|         | busybox-5bc68d56bd-f6p29 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-835787 -- exec                       | multinode-835787     | jenkins | v1.32.0 | 16 Jan 24 02:24 UTC | 16 Jan 24 02:24 UTC |
	|         | busybox-5bc68d56bd-hzzdv                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-835787 -- exec                       | multinode-835787     | jenkins | v1.32.0 | 16 Jan 24 02:24 UTC |                     |
	|         | busybox-5bc68d56bd-hzzdv -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:22:47
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:22:47.315664  991718 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:22:47.315931  991718 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:22:47.315940  991718 out.go:309] Setting ErrFile to fd 2...
	I0116 02:22:47.315945  991718 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:22:47.316123  991718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 02:22:47.316760  991718 out.go:303] Setting JSON to false
	I0116 02:22:47.317865  991718 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11117,"bootTime":1705360651,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:22:47.317938  991718 start.go:138] virtualization: kvm guest
	I0116 02:22:47.320413  991718 out.go:177] * [multinode-835787] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:22:47.321873  991718 notify.go:220] Checking for updates...
	I0116 02:22:47.321906  991718 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 02:22:47.323410  991718 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:22:47.324907  991718 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:22:47.326519  991718 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:22:47.327940  991718 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:22:47.329425  991718 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:22:47.331004  991718 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:22:47.369956  991718 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 02:22:47.371320  991718 start.go:298] selected driver: kvm2
	I0116 02:22:47.371336  991718 start.go:902] validating driver "kvm2" against <nil>
	I0116 02:22:47.371352  991718 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:22:47.372062  991718 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:22:47.372176  991718 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17967-971255/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 02:22:47.388890  991718 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 02:22:47.389018  991718 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:22:47.389287  991718 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 02:22:47.389358  991718 cni.go:84] Creating CNI manager for ""
	I0116 02:22:47.389374  991718 cni.go:136] 0 nodes found, recommending kindnet
	I0116 02:22:47.389388  991718 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 02:22:47.389416  991718 start_flags.go:321] config:
	{Name:multinode-835787 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-835787 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:22:47.389599  991718 iso.go:125] acquiring lock: {Name:mkc0ce0b4b435c5fb7570521b794e5982bb7bf6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:22:47.391702  991718 out.go:177] * Starting control plane node multinode-835787 in cluster multinode-835787
	I0116 02:22:47.393367  991718 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:22:47.393424  991718 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 02:22:47.393438  991718 cache.go:56] Caching tarball of preloaded images
	I0116 02:22:47.393537  991718 preload.go:174] Found /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 02:22:47.393556  991718 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 02:22:47.393939  991718 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/config.json ...
	I0116 02:22:47.393976  991718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/config.json: {Name:mk996abe906449eb1529a6974ae3d61ac3097198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:22:47.394142  991718 start.go:365] acquiring machines lock for multinode-835787: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 02:22:47.394186  991718 start.go:369] acquired machines lock for "multinode-835787" in 26.679µs
	I0116 02:22:47.394223  991718 start.go:93] Provisioning new machine with config: &{Name:multinode-835787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-835787 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:22:47.394305  991718 start.go:125] createHost starting for "" (driver="kvm2")
	I0116 02:22:47.396274  991718 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0116 02:22:47.396433  991718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:22:47.396488  991718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:22:47.412123  991718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0116 02:22:47.412595  991718 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:22:47.413260  991718 main.go:141] libmachine: Using API Version  1
	I0116 02:22:47.413293  991718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:22:47.413599  991718 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:22:47.413852  991718 main.go:141] libmachine: (multinode-835787) Calling .GetMachineName
	I0116 02:22:47.414021  991718 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:22:47.414226  991718 start.go:159] libmachine.API.Create for "multinode-835787" (driver="kvm2")
	I0116 02:22:47.414300  991718 client.go:168] LocalClient.Create starting
	I0116 02:22:47.414346  991718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem
	I0116 02:22:47.414397  991718 main.go:141] libmachine: Decoding PEM data...
	I0116 02:22:47.414416  991718 main.go:141] libmachine: Parsing certificate...
	I0116 02:22:47.414476  991718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem
	I0116 02:22:47.414498  991718 main.go:141] libmachine: Decoding PEM data...
	I0116 02:22:47.414517  991718 main.go:141] libmachine: Parsing certificate...
	I0116 02:22:47.414538  991718 main.go:141] libmachine: Running pre-create checks...
	I0116 02:22:47.414549  991718 main.go:141] libmachine: (multinode-835787) Calling .PreCreateCheck
	I0116 02:22:47.414911  991718 main.go:141] libmachine: (multinode-835787) Calling .GetConfigRaw
	I0116 02:22:47.415381  991718 main.go:141] libmachine: Creating machine...
	I0116 02:22:47.415398  991718 main.go:141] libmachine: (multinode-835787) Calling .Create
	I0116 02:22:47.415541  991718 main.go:141] libmachine: (multinode-835787) Creating KVM machine...
	I0116 02:22:47.416842  991718 main.go:141] libmachine: (multinode-835787) DBG | found existing default KVM network
	I0116 02:22:47.417598  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:22:47.417437  991741 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000149900}
	I0116 02:22:47.423648  991718 main.go:141] libmachine: (multinode-835787) DBG | trying to create private KVM network mk-multinode-835787 192.168.39.0/24...
	I0116 02:22:47.500293  991718 main.go:141] libmachine: (multinode-835787) DBG | private KVM network mk-multinode-835787 192.168.39.0/24 created
	I0116 02:22:47.500346  991718 main.go:141] libmachine: (multinode-835787) Setting up store path in /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787 ...
	I0116 02:22:47.500381  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:22:47.500226  991741 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:22:47.500399  991718 main.go:141] libmachine: (multinode-835787) Building disk image from file:///home/jenkins/minikube-integration/17967-971255/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 02:22:47.500480  991718 main.go:141] libmachine: (multinode-835787) Downloading /home/jenkins/minikube-integration/17967-971255/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17967-971255/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 02:22:47.734951  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:22:47.734793  991741 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa...
	I0116 02:22:47.859546  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:22:47.859369  991741 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/multinode-835787.rawdisk...
	I0116 02:22:47.859591  991718 main.go:141] libmachine: (multinode-835787) DBG | Writing magic tar header
	I0116 02:22:47.859618  991718 main.go:141] libmachine: (multinode-835787) DBG | Writing SSH key tar header
	I0116 02:22:47.859630  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:22:47.859563  991741 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787 ...
	I0116 02:22:47.859655  991718 main.go:141] libmachine: (multinode-835787) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787
	I0116 02:22:47.859684  991718 main.go:141] libmachine: (multinode-835787) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787 (perms=drwx------)
	I0116 02:22:47.859701  991718 main.go:141] libmachine: (multinode-835787) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube/machines (perms=drwxr-xr-x)
	I0116 02:22:47.859718  991718 main.go:141] libmachine: (multinode-835787) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube (perms=drwxr-xr-x)
	I0116 02:22:47.859734  991718 main.go:141] libmachine: (multinode-835787) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255 (perms=drwxrwxr-x)
	I0116 02:22:47.859749  991718 main.go:141] libmachine: (multinode-835787) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube/machines
	I0116 02:22:47.859769  991718 main.go:141] libmachine: (multinode-835787) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:22:47.859784  991718 main.go:141] libmachine: (multinode-835787) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255
	I0116 02:22:47.859793  991718 main.go:141] libmachine: (multinode-835787) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 02:22:47.859808  991718 main.go:141] libmachine: (multinode-835787) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 02:22:47.859818  991718 main.go:141] libmachine: (multinode-835787) Creating domain...
	I0116 02:22:47.859833  991718 main.go:141] libmachine: (multinode-835787) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 02:22:47.859848  991718 main.go:141] libmachine: (multinode-835787) DBG | Checking permissions on dir: /home/jenkins
	I0116 02:22:47.859899  991718 main.go:141] libmachine: (multinode-835787) DBG | Checking permissions on dir: /home
	I0116 02:22:47.859933  991718 main.go:141] libmachine: (multinode-835787) DBG | Skipping /home - not owner
	I0116 02:22:47.861073  991718 main.go:141] libmachine: (multinode-835787) define libvirt domain using xml: 
	I0116 02:22:47.861099  991718 main.go:141] libmachine: (multinode-835787) <domain type='kvm'>
	I0116 02:22:47.861111  991718 main.go:141] libmachine: (multinode-835787)   <name>multinode-835787</name>
	I0116 02:22:47.861124  991718 main.go:141] libmachine: (multinode-835787)   <memory unit='MiB'>2200</memory>
	I0116 02:22:47.861158  991718 main.go:141] libmachine: (multinode-835787)   <vcpu>2</vcpu>
	I0116 02:22:47.861178  991718 main.go:141] libmachine: (multinode-835787)   <features>
	I0116 02:22:47.861187  991718 main.go:141] libmachine: (multinode-835787)     <acpi/>
	I0116 02:22:47.861195  991718 main.go:141] libmachine: (multinode-835787)     <apic/>
	I0116 02:22:47.861201  991718 main.go:141] libmachine: (multinode-835787)     <pae/>
	I0116 02:22:47.861212  991718 main.go:141] libmachine: (multinode-835787)     
	I0116 02:22:47.861225  991718 main.go:141] libmachine: (multinode-835787)   </features>
	I0116 02:22:47.861239  991718 main.go:141] libmachine: (multinode-835787)   <cpu mode='host-passthrough'>
	I0116 02:22:47.861310  991718 main.go:141] libmachine: (multinode-835787)   
	I0116 02:22:47.861344  991718 main.go:141] libmachine: (multinode-835787)   </cpu>
	I0116 02:22:47.861355  991718 main.go:141] libmachine: (multinode-835787)   <os>
	I0116 02:22:47.861374  991718 main.go:141] libmachine: (multinode-835787)     <type>hvm</type>
	I0116 02:22:47.861393  991718 main.go:141] libmachine: (multinode-835787)     <boot dev='cdrom'/>
	I0116 02:22:47.861411  991718 main.go:141] libmachine: (multinode-835787)     <boot dev='hd'/>
	I0116 02:22:47.861426  991718 main.go:141] libmachine: (multinode-835787)     <bootmenu enable='no'/>
	I0116 02:22:47.861438  991718 main.go:141] libmachine: (multinode-835787)   </os>
	I0116 02:22:47.861452  991718 main.go:141] libmachine: (multinode-835787)   <devices>
	I0116 02:22:47.861466  991718 main.go:141] libmachine: (multinode-835787)     <disk type='file' device='cdrom'>
	I0116 02:22:47.861485  991718 main.go:141] libmachine: (multinode-835787)       <source file='/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/boot2docker.iso'/>
	I0116 02:22:47.861504  991718 main.go:141] libmachine: (multinode-835787)       <target dev='hdc' bus='scsi'/>
	I0116 02:22:47.861529  991718 main.go:141] libmachine: (multinode-835787)       <readonly/>
	I0116 02:22:47.861541  991718 main.go:141] libmachine: (multinode-835787)     </disk>
	I0116 02:22:47.861557  991718 main.go:141] libmachine: (multinode-835787)     <disk type='file' device='disk'>
	I0116 02:22:47.861572  991718 main.go:141] libmachine: (multinode-835787)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 02:22:47.861593  991718 main.go:141] libmachine: (multinode-835787)       <source file='/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/multinode-835787.rawdisk'/>
	I0116 02:22:47.861612  991718 main.go:141] libmachine: (multinode-835787)       <target dev='hda' bus='virtio'/>
	I0116 02:22:47.861621  991718 main.go:141] libmachine: (multinode-835787)     </disk>
	I0116 02:22:47.861630  991718 main.go:141] libmachine: (multinode-835787)     <interface type='network'>
	I0116 02:22:47.861645  991718 main.go:141] libmachine: (multinode-835787)       <source network='mk-multinode-835787'/>
	I0116 02:22:47.861658  991718 main.go:141] libmachine: (multinode-835787)       <model type='virtio'/>
	I0116 02:22:47.861680  991718 main.go:141] libmachine: (multinode-835787)     </interface>
	I0116 02:22:47.861697  991718 main.go:141] libmachine: (multinode-835787)     <interface type='network'>
	I0116 02:22:47.861707  991718 main.go:141] libmachine: (multinode-835787)       <source network='default'/>
	I0116 02:22:47.861719  991718 main.go:141] libmachine: (multinode-835787)       <model type='virtio'/>
	I0116 02:22:47.861732  991718 main.go:141] libmachine: (multinode-835787)     </interface>
	I0116 02:22:47.861745  991718 main.go:141] libmachine: (multinode-835787)     <serial type='pty'>
	I0116 02:22:47.861758  991718 main.go:141] libmachine: (multinode-835787)       <target port='0'/>
	I0116 02:22:47.861773  991718 main.go:141] libmachine: (multinode-835787)     </serial>
	I0116 02:22:47.861786  991718 main.go:141] libmachine: (multinode-835787)     <console type='pty'>
	I0116 02:22:47.861794  991718 main.go:141] libmachine: (multinode-835787)       <target type='serial' port='0'/>
	I0116 02:22:47.861830  991718 main.go:141] libmachine: (multinode-835787)     </console>
	I0116 02:22:47.861848  991718 main.go:141] libmachine: (multinode-835787)     <rng model='virtio'>
	I0116 02:22:47.861863  991718 main.go:141] libmachine: (multinode-835787)       <backend model='random'>/dev/random</backend>
	I0116 02:22:47.861874  991718 main.go:141] libmachine: (multinode-835787)     </rng>
	I0116 02:22:47.861884  991718 main.go:141] libmachine: (multinode-835787)     
	I0116 02:22:47.861892  991718 main.go:141] libmachine: (multinode-835787)     
	I0116 02:22:47.861905  991718 main.go:141] libmachine: (multinode-835787)   </devices>
	I0116 02:22:47.861921  991718 main.go:141] libmachine: (multinode-835787) </domain>
	I0116 02:22:47.861944  991718 main.go:141] libmachine: (multinode-835787) 
	I0116 02:22:47.866181  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:07:42:bf in network default
	I0116 02:22:47.866725  991718 main.go:141] libmachine: (multinode-835787) Ensuring networks are active...
	I0116 02:22:47.866743  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:22:47.867432  991718 main.go:141] libmachine: (multinode-835787) Ensuring network default is active
	I0116 02:22:47.867847  991718 main.go:141] libmachine: (multinode-835787) Ensuring network mk-multinode-835787 is active
	I0116 02:22:47.868346  991718 main.go:141] libmachine: (multinode-835787) Getting domain xml...
	I0116 02:22:47.869048  991718 main.go:141] libmachine: (multinode-835787) Creating domain...
	I0116 02:22:49.074886  991718 main.go:141] libmachine: (multinode-835787) Waiting to get IP...
	I0116 02:22:49.075830  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:22:49.076274  991718 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:22:49.076316  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:22:49.076257  991741 retry.go:31] will retry after 264.886677ms: waiting for machine to come up
	I0116 02:22:49.342857  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:22:49.343326  991718 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:22:49.343352  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:22:49.343283  991741 retry.go:31] will retry after 327.459061ms: waiting for machine to come up
	I0116 02:22:49.673042  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:22:49.673424  991718 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:22:49.673458  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:22:49.673378  991741 retry.go:31] will retry after 367.691326ms: waiting for machine to come up
	I0116 02:22:50.042991  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:22:50.043448  991718 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:22:50.043473  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:22:50.043415  991741 retry.go:31] will retry after 499.374147ms: waiting for machine to come up
	I0116 02:22:50.544083  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:22:50.544412  991718 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:22:50.544449  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:22:50.544362  991741 retry.go:31] will retry after 694.877262ms: waiting for machine to come up
	I0116 02:22:51.241431  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:22:51.241783  991718 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:22:51.241842  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:22:51.241732  991741 retry.go:31] will retry after 752.519425ms: waiting for machine to come up
	I0116 02:22:51.996217  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:22:51.996570  991718 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:22:51.996593  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:22:51.996525  991741 retry.go:31] will retry after 865.223885ms: waiting for machine to come up
	I0116 02:22:52.862964  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:22:52.863377  991718 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:22:52.863418  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:22:52.863326  991741 retry.go:31] will retry after 1.398410437s: waiting for machine to come up
	I0116 02:22:54.264422  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:22:54.265413  991718 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:22:54.265444  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:22:54.265358  991741 retry.go:31] will retry after 1.604757162s: waiting for machine to come up
	I0116 02:22:55.872226  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:22:55.872653  991718 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:22:55.872683  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:22:55.872598  991741 retry.go:31] will retry after 2.134326867s: waiting for machine to come up
	I0116 02:22:58.009045  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:22:58.009521  991718 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:22:58.009555  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:22:58.009457  991741 retry.go:31] will retry after 2.806685253s: waiting for machine to come up
	I0116 02:23:00.819007  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:00.819443  991718 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:23:00.819476  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:23:00.819389  991741 retry.go:31] will retry after 3.575279275s: waiting for machine to come up
	I0116 02:23:04.397659  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:04.398142  991718 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:23:04.398194  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:23:04.398101  991741 retry.go:31] will retry after 2.799379968s: waiting for machine to come up
	I0116 02:23:07.201155  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:07.201639  991718 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:23:07.201670  991718 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:23:07.201591  991741 retry.go:31] will retry after 3.763701675s: waiting for machine to come up
	I0116 02:23:10.967373  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:10.967848  991718 main.go:141] libmachine: (multinode-835787) Found IP for machine: 192.168.39.50
	I0116 02:23:10.967871  991718 main.go:141] libmachine: (multinode-835787) Reserving static IP address...
	I0116 02:23:10.967886  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has current primary IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:10.968279  991718 main.go:141] libmachine: (multinode-835787) DBG | unable to find host DHCP lease matching {name: "multinode-835787", mac: "52:54:00:20:87:3c", ip: "192.168.39.50"} in network mk-multinode-835787
	I0116 02:23:11.046982  991718 main.go:141] libmachine: (multinode-835787) DBG | Getting to WaitForSSH function...
	I0116 02:23:11.047019  991718 main.go:141] libmachine: (multinode-835787) Reserved static IP address: 192.168.39.50
	I0116 02:23:11.047036  991718 main.go:141] libmachine: (multinode-835787) Waiting for SSH to be available...
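(Editor's note: the "Waiting to get IP" retry loop above is the driver polling libvirt's DHCP leases for the domain's MAC address until one appears. An equivalent manual check from the shell, illustrative only and using the network name and MAC taken from the log lines above, would be:)

	# list active DHCP leases on the minikube-created libvirt network and look for the domain's MAC
	virsh -c qemu:///system net-dhcp-leases mk-multinode-835787 | grep 52:54:00:20:87:3c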
	I0116 02:23:11.049327  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.049695  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:minikube Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:11.049724  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.049897  991718 main.go:141] libmachine: (multinode-835787) DBG | Using SSH client type: external
	I0116 02:23:11.049933  991718 main.go:141] libmachine: (multinode-835787) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa (-rw-------)
	I0116 02:23:11.049976  991718 main.go:141] libmachine: (multinode-835787) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 02:23:11.049990  991718 main.go:141] libmachine: (multinode-835787) DBG | About to run SSH command:
	I0116 02:23:11.050048  991718 main.go:141] libmachine: (multinode-835787) DBG | exit 0
	I0116 02:23:11.142101  991718 main.go:141] libmachine: (multinode-835787) DBG | SSH cmd err, output: <nil>: 
	I0116 02:23:11.142380  991718 main.go:141] libmachine: (multinode-835787) KVM machine creation complete!
	I0116 02:23:11.142747  991718 main.go:141] libmachine: (multinode-835787) Calling .GetConfigRaw
	I0116 02:23:11.143315  991718 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:23:11.143511  991718 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:23:11.143689  991718 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0116 02:23:11.143713  991718 main.go:141] libmachine: (multinode-835787) Calling .GetState
	I0116 02:23:11.144976  991718 main.go:141] libmachine: Detecting operating system of created instance...
	I0116 02:23:11.144992  991718 main.go:141] libmachine: Waiting for SSH to be available...
	I0116 02:23:11.144998  991718 main.go:141] libmachine: Getting to WaitForSSH function...
	I0116 02:23:11.145005  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:23:11.147396  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.147708  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:11.147749  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.147907  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:23:11.148131  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:11.148305  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:11.148524  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:23:11.148744  991718 main.go:141] libmachine: Using SSH client type: native
	I0116 02:23:11.149150  991718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0116 02:23:11.149168  991718 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0116 02:23:11.269473  991718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:23:11.269514  991718 main.go:141] libmachine: Detecting the provisioner...
	I0116 02:23:11.269527  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:23:11.272673  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.272957  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:11.272985  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.273173  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:23:11.273433  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:11.273627  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:11.273779  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:23:11.273973  991718 main.go:141] libmachine: Using SSH client type: native
	I0116 02:23:11.274307  991718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0116 02:23:11.274319  991718 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0116 02:23:11.395344  991718 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0116 02:23:11.395449  991718 main.go:141] libmachine: found compatible host: buildroot
	I0116 02:23:11.395467  991718 main.go:141] libmachine: Provisioning with buildroot...
	I0116 02:23:11.395475  991718 main.go:141] libmachine: (multinode-835787) Calling .GetMachineName
	I0116 02:23:11.395765  991718 buildroot.go:166] provisioning hostname "multinode-835787"
	I0116 02:23:11.395793  991718 main.go:141] libmachine: (multinode-835787) Calling .GetMachineName
	I0116 02:23:11.395976  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:23:11.398663  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.399090  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:11.399122  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.399291  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:23:11.399490  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:11.399666  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:11.399807  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:23:11.400006  991718 main.go:141] libmachine: Using SSH client type: native
	I0116 02:23:11.400373  991718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0116 02:23:11.400393  991718 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-835787 && echo "multinode-835787" | sudo tee /etc/hostname
	I0116 02:23:11.538416  991718 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-835787
	
	I0116 02:23:11.538464  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:23:11.541292  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.541669  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:11.541696  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.541901  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:23:11.542125  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:11.542317  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:11.542459  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:23:11.542599  991718 main.go:141] libmachine: Using SSH client type: native
	I0116 02:23:11.542915  991718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0116 02:23:11.542937  991718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-835787' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-835787/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-835787' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:23:11.670650  991718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:23:11.670686  991718 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 02:23:11.670721  991718 buildroot.go:174] setting up certificates
	I0116 02:23:11.670731  991718 provision.go:83] configureAuth start
	I0116 02:23:11.670743  991718 main.go:141] libmachine: (multinode-835787) Calling .GetMachineName
	I0116 02:23:11.671066  991718 main.go:141] libmachine: (multinode-835787) Calling .GetIP
	I0116 02:23:11.673612  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.674051  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:11.674084  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.674291  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:23:11.676669  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.677049  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:11.677078  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.677290  991718 provision.go:138] copyHostCerts
	I0116 02:23:11.677327  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 02:23:11.677365  991718 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 02:23:11.677378  991718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 02:23:11.677450  991718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 02:23:11.677613  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 02:23:11.677647  991718 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 02:23:11.677663  991718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 02:23:11.677700  991718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 02:23:11.677764  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 02:23:11.677787  991718 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 02:23:11.677796  991718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 02:23:11.677835  991718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 02:23:11.677895  991718 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.multinode-835787 san=[192.168.39.50 192.168.39.50 localhost 127.0.0.1 minikube multinode-835787]
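(Editor's note: the server certificate above is generated in-process by minikube's Go code, not by a CLI tool. As a rough, illustrative bash equivalent only, with filenames and SANs mirroring the log line above, a one-off openssl signing against the existing CA could look like:)

	# illustrative only: minikube does this in Go, not via openssl
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.multinode-835787"
	# sign with the cluster CA and attach the SANs listed in the log line above (bash process substitution)
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf "subjectAltName=IP:192.168.39.50,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-835787")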
	I0116 02:23:11.770735  991718 provision.go:172] copyRemoteCerts
	I0116 02:23:11.770821  991718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:23:11.770858  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:23:11.773956  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.774448  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:11.774478  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.774728  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:23:11.774993  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:11.775142  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:23:11.775296  991718 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa Username:docker}
	I0116 02:23:11.864078  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 02:23:11.864177  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 02:23:11.891846  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 02:23:11.891928  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 02:23:11.919041  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 02:23:11.919127  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0116 02:23:11.944006  991718 provision.go:86] duration metric: configureAuth took 273.256048ms
	I0116 02:23:11.944042  991718 buildroot.go:189] setting minikube options for container-runtime
	I0116 02:23:11.944267  991718 config.go:182] Loaded profile config "multinode-835787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:23:11.944369  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:23:11.947251  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.947612  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:11.947645  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:11.947848  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:23:11.948038  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:11.948246  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:11.948422  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:23:11.948611  991718 main.go:141] libmachine: Using SSH client type: native
	I0116 02:23:11.949127  991718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0116 02:23:11.949149  991718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 02:23:12.275443  991718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 02:23:12.275512  991718 main.go:141] libmachine: Checking connection to Docker...
	I0116 02:23:12.275524  991718 main.go:141] libmachine: (multinode-835787) Calling .GetURL
	I0116 02:23:12.276912  991718 main.go:141] libmachine: (multinode-835787) DBG | Using libvirt version 6000000
	I0116 02:23:12.278830  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:12.279116  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:12.279145  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:12.279396  991718 main.go:141] libmachine: Docker is up and running!
	I0116 02:23:12.279418  991718 main.go:141] libmachine: Reticulating splines...
	I0116 02:23:12.279426  991718 client.go:171] LocalClient.Create took 24.865112748s
	I0116 02:23:12.279453  991718 start.go:167] duration metric: libmachine.API.Create for "multinode-835787" took 24.865229515s
	I0116 02:23:12.279464  991718 start.go:300] post-start starting for "multinode-835787" (driver="kvm2")
	I0116 02:23:12.279475  991718 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:23:12.279493  991718 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:23:12.279760  991718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:23:12.279807  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:23:12.281792  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:12.282076  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:12.282106  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:12.282210  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:23:12.282418  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:12.282553  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:23:12.282684  991718 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa Username:docker}
	I0116 02:23:12.373256  991718 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:23:12.377713  991718 command_runner.go:130] > NAME=Buildroot
	I0116 02:23:12.377734  991718 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 02:23:12.377739  991718 command_runner.go:130] > ID=buildroot
	I0116 02:23:12.377745  991718 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 02:23:12.377752  991718 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 02:23:12.377952  991718 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 02:23:12.377978  991718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 02:23:12.378041  991718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 02:23:12.378143  991718 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 02:23:12.378156  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> /etc/ssl/certs/9784822.pem
	I0116 02:23:12.378263  991718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 02:23:12.388888  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 02:23:12.413642  991718 start.go:303] post-start completed in 134.158816ms
	I0116 02:23:12.413700  991718 main.go:141] libmachine: (multinode-835787) Calling .GetConfigRaw
	I0116 02:23:12.414362  991718 main.go:141] libmachine: (multinode-835787) Calling .GetIP
	I0116 02:23:12.416915  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:12.417252  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:12.417286  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:12.417576  991718 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/config.json ...
	I0116 02:23:12.417775  991718 start.go:128] duration metric: createHost completed in 25.023448678s
	I0116 02:23:12.417821  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:23:12.419918  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:12.420243  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:12.420274  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:12.420410  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:23:12.420604  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:12.420759  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:12.420900  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:23:12.421067  991718 main.go:141] libmachine: Using SSH client type: native
	I0116 02:23:12.421516  991718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0116 02:23:12.421532  991718 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 02:23:12.542687  991718 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705371792.524874676
	
	I0116 02:23:12.542715  991718 fix.go:206] guest clock: 1705371792.524874676
	I0116 02:23:12.542723  991718 fix.go:219] Guest: 2024-01-16 02:23:12.524874676 +0000 UTC Remote: 2024-01-16 02:23:12.417788001 +0000 UTC m=+25.155422552 (delta=107.086675ms)
	I0116 02:23:12.542741  991718 fix.go:190] guest clock delta is within tolerance: 107.086675ms
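(Editor's note: the "date +%!s(MISSING).%!N(MISSING)" line above is the command date +%s.%N with its format verbs mangled by the logger, which matches the seconds.nanoseconds value returned. A minimal standalone sketch of the same guest/host clock-delta check, with hypothetical variable names, would be:)

	guest=$(ssh docker@192.168.39.50 'date +%s.%N')   # guest wall clock, seconds.nanoseconds
	host=$(date +%s.%N)                               # host wall clock at roughly the same moment
	echo "delta: $(echo "$host - $guest" | bc) s"     # should stay within minikube's tolerance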
	I0116 02:23:12.542747  991718 start.go:83] releasing machines lock for "multinode-835787", held for 25.14854932s
	I0116 02:23:12.542766  991718 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:23:12.543069  991718 main.go:141] libmachine: (multinode-835787) Calling .GetIP
	I0116 02:23:12.545582  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:12.545921  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:12.545982  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:12.546110  991718 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:23:12.546689  991718 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:23:12.546839  991718 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:23:12.546949  991718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:23:12.546999  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:23:12.547069  991718 ssh_runner.go:195] Run: cat /version.json
	I0116 02:23:12.547095  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:23:12.549640  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:12.549889  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:12.549973  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:12.550005  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:12.550107  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:23:12.550283  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:12.550333  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:12.550359  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:12.550448  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:23:12.550534  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:23:12.550604  991718 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa Username:docker}
	I0116 02:23:12.550659  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:12.550789  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:23:12.550953  991718 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa Username:docker}
	I0116 02:23:12.665148  991718 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 02:23:12.665999  991718 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0116 02:23:12.666181  991718 ssh_runner.go:195] Run: systemctl --version
	I0116 02:23:12.671696  991718 command_runner.go:130] > systemd 247 (247)
	I0116 02:23:12.671738  991718 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0116 02:23:12.672074  991718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 02:23:12.835740  991718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 02:23:12.841748  991718 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0116 02:23:12.841825  991718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 02:23:12.841911  991718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:23:12.856945  991718 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0116 02:23:12.857627  991718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 02:23:12.857668  991718 start.go:475] detecting cgroup driver to use...
	I0116 02:23:12.857787  991718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:23:12.872945  991718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:23:12.885627  991718 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:23:12.885722  991718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:23:12.898790  991718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:23:12.911680  991718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:23:12.925339  991718 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0116 02:23:13.023688  991718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:23:13.037840  991718 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0116 02:23:13.145547  991718 docker.go:233] disabling docker service ...
	I0116 02:23:13.145638  991718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:23:13.159624  991718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:23:13.172257  991718 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0116 02:23:13.172405  991718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:23:13.186888  991718 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0116 02:23:13.276990  991718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:23:13.378584  991718 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0116 02:23:13.378614  991718 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0116 02:23:13.378696  991718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 02:23:13.391181  991718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:23:13.408408  991718 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0116 02:23:13.408452  991718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 02:23:13.408500  991718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:23:13.417839  991718 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 02:23:13.417916  991718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:23:13.427100  991718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:23:13.436164  991718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:23:13.445186  991718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:23:13.454711  991718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:23:13.462891  991718 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 02:23:13.462952  991718 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 02:23:13.463000  991718 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 02:23:13.475982  991718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:23:13.484585  991718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:23:13.605779  991718 ssh_runner.go:195] Run: sudo systemctl restart crio
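(Editor's note: taken together, the sed edits above set the pause image, switch the cgroup driver to cgroupfs, and pin conmon to the pod cgroup before the daemon-reload and crio restart. Assuming those keys live under the usual CRI-O tables, the resulting /etc/crio/crio.conf.d/02-crio.conf fragment would look roughly like the following; this is an illustrative fragment only, the real drop-in carries additional keys:)

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"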
	I0116 02:23:13.775553  991718 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 02:23:13.775662  991718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 02:23:13.784960  991718 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 02:23:13.784995  991718 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 02:23:13.785006  991718 command_runner.go:130] > Device: 16h/22d	Inode: 783         Links: 1
	I0116 02:23:13.785016  991718 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:23:13.785024  991718 command_runner.go:130] > Access: 2024-01-16 02:23:13.746696073 +0000
	I0116 02:23:13.785042  991718 command_runner.go:130] > Modify: 2024-01-16 02:23:13.746696073 +0000
	I0116 02:23:13.785052  991718 command_runner.go:130] > Change: 2024-01-16 02:23:13.746696073 +0000
	I0116 02:23:13.785058  991718 command_runner.go:130] >  Birth: -
	I0116 02:23:13.785086  991718 start.go:543] Will wait 60s for crictl version
	I0116 02:23:13.785142  991718 ssh_runner.go:195] Run: which crictl
	I0116 02:23:13.788980  991718 command_runner.go:130] > /usr/bin/crictl
	I0116 02:23:13.789097  991718 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:23:13.825039  991718 command_runner.go:130] > Version:  0.1.0
	I0116 02:23:13.825074  991718 command_runner.go:130] > RuntimeName:  cri-o
	I0116 02:23:13.825081  991718 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0116 02:23:13.825089  991718 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 02:23:13.825122  991718 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 02:23:13.825219  991718 ssh_runner.go:195] Run: crio --version
	I0116 02:23:13.872048  991718 command_runner.go:130] > crio version 1.24.1
	I0116 02:23:13.872089  991718 command_runner.go:130] > Version:          1.24.1
	I0116 02:23:13.872097  991718 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 02:23:13.872101  991718 command_runner.go:130] > GitTreeState:     dirty
	I0116 02:23:13.872107  991718 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 02:23:13.872112  991718 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 02:23:13.872116  991718 command_runner.go:130] > Compiler:         gc
	I0116 02:23:13.872121  991718 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:23:13.872141  991718 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:23:13.872154  991718 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:23:13.872159  991718 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:23:13.872163  991718 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:23:13.873335  991718 ssh_runner.go:195] Run: crio --version
	I0116 02:23:13.921939  991718 command_runner.go:130] > crio version 1.24.1
	I0116 02:23:13.921965  991718 command_runner.go:130] > Version:          1.24.1
	I0116 02:23:13.921991  991718 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 02:23:13.921996  991718 command_runner.go:130] > GitTreeState:     dirty
	I0116 02:23:13.922002  991718 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 02:23:13.922007  991718 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 02:23:13.922011  991718 command_runner.go:130] > Compiler:         gc
	I0116 02:23:13.922015  991718 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:23:13.922022  991718 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:23:13.922029  991718 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:23:13.922036  991718 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:23:13.922041  991718 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:23:13.925246  991718 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 02:23:13.926522  991718 main.go:141] libmachine: (multinode-835787) Calling .GetIP
	I0116 02:23:13.929366  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:13.929823  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:13.929867  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:13.930095  991718 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 02:23:13.934217  991718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:23:13.946832  991718 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:23:13.946898  991718 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:23:13.981162  991718 command_runner.go:130] > {
	I0116 02:23:13.981191  991718 command_runner.go:130] >   "images": [
	I0116 02:23:13.981196  991718 command_runner.go:130] >   ]
	I0116 02:23:13.981200  991718 command_runner.go:130] > }
	I0116 02:23:13.981317  991718 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 02:23:13.981409  991718 ssh_runner.go:195] Run: which lz4
	I0116 02:23:13.985145  991718 command_runner.go:130] > /usr/bin/lz4
	I0116 02:23:13.985180  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0116 02:23:13.985282  991718 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 02:23:13.989078  991718 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 02:23:13.989260  991718 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 02:23:13.989299  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 02:23:15.806603  991718 crio.go:444] Took 1.821350 seconds to copy over tarball
	I0116 02:23:15.806712  991718 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 02:23:19.016694  991718 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.209942936s)
	I0116 02:23:19.016734  991718 crio.go:451] Took 3.210092 seconds to extract the tarball
	I0116 02:23:19.016747  991718 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 02:23:19.061086  991718 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:23:19.135863  991718 command_runner.go:130] > {
	I0116 02:23:19.135893  991718 command_runner.go:130] >   "images": [
	I0116 02:23:19.135899  991718 command_runner.go:130] >     {
	I0116 02:23:19.135932  991718 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0116 02:23:19.135941  991718 command_runner.go:130] >       "repoTags": [
	I0116 02:23:19.135950  991718 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0116 02:23:19.135956  991718 command_runner.go:130] >       ],
	I0116 02:23:19.135963  991718 command_runner.go:130] >       "repoDigests": [
	I0116 02:23:19.135984  991718 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0116 02:23:19.136002  991718 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0116 02:23:19.136011  991718 command_runner.go:130] >       ],
	I0116 02:23:19.136021  991718 command_runner.go:130] >       "size": "65258016",
	I0116 02:23:19.136029  991718 command_runner.go:130] >       "uid": null,
	I0116 02:23:19.136039  991718 command_runner.go:130] >       "username": "",
	I0116 02:23:19.136053  991718 command_runner.go:130] >       "spec": null,
	I0116 02:23:19.136062  991718 command_runner.go:130] >       "pinned": false
	I0116 02:23:19.136068  991718 command_runner.go:130] >     },
	I0116 02:23:19.136077  991718 command_runner.go:130] >     {
	I0116 02:23:19.136087  991718 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0116 02:23:19.136096  991718 command_runner.go:130] >       "repoTags": [
	I0116 02:23:19.136104  991718 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0116 02:23:19.136112  991718 command_runner.go:130] >       ],
	I0116 02:23:19.136120  991718 command_runner.go:130] >       "repoDigests": [
	I0116 02:23:19.136134  991718 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0116 02:23:19.136148  991718 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0116 02:23:19.136157  991718 command_runner.go:130] >       ],
	I0116 02:23:19.136177  991718 command_runner.go:130] >       "size": "31470524",
	I0116 02:23:19.136190  991718 command_runner.go:130] >       "uid": null,
	I0116 02:23:19.136201  991718 command_runner.go:130] >       "username": "",
	I0116 02:23:19.136208  991718 command_runner.go:130] >       "spec": null,
	I0116 02:23:19.136219  991718 command_runner.go:130] >       "pinned": false
	I0116 02:23:19.136225  991718 command_runner.go:130] >     },
	I0116 02:23:19.136233  991718 command_runner.go:130] >     {
	I0116 02:23:19.136243  991718 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0116 02:23:19.136250  991718 command_runner.go:130] >       "repoTags": [
	I0116 02:23:19.136261  991718 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0116 02:23:19.136270  991718 command_runner.go:130] >       ],
	I0116 02:23:19.136281  991718 command_runner.go:130] >       "repoDigests": [
	I0116 02:23:19.136294  991718 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0116 02:23:19.136308  991718 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0116 02:23:19.136318  991718 command_runner.go:130] >       ],
	I0116 02:23:19.136324  991718 command_runner.go:130] >       "size": "53621675",
	I0116 02:23:19.136334  991718 command_runner.go:130] >       "uid": null,
	I0116 02:23:19.136341  991718 command_runner.go:130] >       "username": "",
	I0116 02:23:19.136352  991718 command_runner.go:130] >       "spec": null,
	I0116 02:23:19.136366  991718 command_runner.go:130] >       "pinned": false
	I0116 02:23:19.136375  991718 command_runner.go:130] >     },
	I0116 02:23:19.136382  991718 command_runner.go:130] >     {
	I0116 02:23:19.136395  991718 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0116 02:23:19.136404  991718 command_runner.go:130] >       "repoTags": [
	I0116 02:23:19.136414  991718 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0116 02:23:19.136423  991718 command_runner.go:130] >       ],
	I0116 02:23:19.136431  991718 command_runner.go:130] >       "repoDigests": [
	I0116 02:23:19.136446  991718 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0116 02:23:19.136466  991718 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0116 02:23:19.136488  991718 command_runner.go:130] >       ],
	I0116 02:23:19.136497  991718 command_runner.go:130] >       "size": "295456551",
	I0116 02:23:19.136503  991718 command_runner.go:130] >       "uid": {
	I0116 02:23:19.136512  991718 command_runner.go:130] >         "value": "0"
	I0116 02:23:19.136517  991718 command_runner.go:130] >       },
	I0116 02:23:19.136524  991718 command_runner.go:130] >       "username": "",
	I0116 02:23:19.136530  991718 command_runner.go:130] >       "spec": null,
	I0116 02:23:19.136539  991718 command_runner.go:130] >       "pinned": false
	I0116 02:23:19.136550  991718 command_runner.go:130] >     },
	I0116 02:23:19.136558  991718 command_runner.go:130] >     {
	I0116 02:23:19.136568  991718 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0116 02:23:19.136577  991718 command_runner.go:130] >       "repoTags": [
	I0116 02:23:19.136584  991718 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0116 02:23:19.136593  991718 command_runner.go:130] >       ],
	I0116 02:23:19.136599  991718 command_runner.go:130] >       "repoDigests": [
	I0116 02:23:19.136613  991718 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0116 02:23:19.136627  991718 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0116 02:23:19.136636  991718 command_runner.go:130] >       ],
	I0116 02:23:19.136642  991718 command_runner.go:130] >       "size": "127226832",
	I0116 02:23:19.136651  991718 command_runner.go:130] >       "uid": {
	I0116 02:23:19.136657  991718 command_runner.go:130] >         "value": "0"
	I0116 02:23:19.136666  991718 command_runner.go:130] >       },
	I0116 02:23:19.136673  991718 command_runner.go:130] >       "username": "",
	I0116 02:23:19.136679  991718 command_runner.go:130] >       "spec": null,
	I0116 02:23:19.136689  991718 command_runner.go:130] >       "pinned": false
	I0116 02:23:19.136699  991718 command_runner.go:130] >     },
	I0116 02:23:19.136709  991718 command_runner.go:130] >     {
	I0116 02:23:19.136723  991718 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0116 02:23:19.136732  991718 command_runner.go:130] >       "repoTags": [
	I0116 02:23:19.136742  991718 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0116 02:23:19.136751  991718 command_runner.go:130] >       ],
	I0116 02:23:19.136758  991718 command_runner.go:130] >       "repoDigests": [
	I0116 02:23:19.136773  991718 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0116 02:23:19.136789  991718 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0116 02:23:19.136798  991718 command_runner.go:130] >       ],
	I0116 02:23:19.136805  991718 command_runner.go:130] >       "size": "123261750",
	I0116 02:23:19.136814  991718 command_runner.go:130] >       "uid": {
	I0116 02:23:19.136822  991718 command_runner.go:130] >         "value": "0"
	I0116 02:23:19.136836  991718 command_runner.go:130] >       },
	I0116 02:23:19.136847  991718 command_runner.go:130] >       "username": "",
	I0116 02:23:19.136853  991718 command_runner.go:130] >       "spec": null,
	I0116 02:23:19.136863  991718 command_runner.go:130] >       "pinned": false
	I0116 02:23:19.136869  991718 command_runner.go:130] >     },
	I0116 02:23:19.136877  991718 command_runner.go:130] >     {
	I0116 02:23:19.136893  991718 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0116 02:23:19.136903  991718 command_runner.go:130] >       "repoTags": [
	I0116 02:23:19.136911  991718 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0116 02:23:19.136919  991718 command_runner.go:130] >       ],
	I0116 02:23:19.136926  991718 command_runner.go:130] >       "repoDigests": [
	I0116 02:23:19.136944  991718 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0116 02:23:19.136959  991718 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0116 02:23:19.136968  991718 command_runner.go:130] >       ],
	I0116 02:23:19.136976  991718 command_runner.go:130] >       "size": "74749335",
	I0116 02:23:19.136985  991718 command_runner.go:130] >       "uid": null,
	I0116 02:23:19.136992  991718 command_runner.go:130] >       "username": "",
	I0116 02:23:19.137002  991718 command_runner.go:130] >       "spec": null,
	I0116 02:23:19.137013  991718 command_runner.go:130] >       "pinned": false
	I0116 02:23:19.137022  991718 command_runner.go:130] >     },
	I0116 02:23:19.137028  991718 command_runner.go:130] >     {
	I0116 02:23:19.137038  991718 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0116 02:23:19.137047  991718 command_runner.go:130] >       "repoTags": [
	I0116 02:23:19.137056  991718 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0116 02:23:19.137068  991718 command_runner.go:130] >       ],
	I0116 02:23:19.137077  991718 command_runner.go:130] >       "repoDigests": [
	I0116 02:23:19.137113  991718 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0116 02:23:19.137130  991718 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0116 02:23:19.137136  991718 command_runner.go:130] >       ],
	I0116 02:23:19.137143  991718 command_runner.go:130] >       "size": "61551410",
	I0116 02:23:19.137153  991718 command_runner.go:130] >       "uid": {
	I0116 02:23:19.137161  991718 command_runner.go:130] >         "value": "0"
	I0116 02:23:19.137170  991718 command_runner.go:130] >       },
	I0116 02:23:19.137178  991718 command_runner.go:130] >       "username": "",
	I0116 02:23:19.137188  991718 command_runner.go:130] >       "spec": null,
	I0116 02:23:19.137196  991718 command_runner.go:130] >       "pinned": false
	I0116 02:23:19.137205  991718 command_runner.go:130] >     },
	I0116 02:23:19.137209  991718 command_runner.go:130] >     {
	I0116 02:23:19.137216  991718 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0116 02:23:19.137223  991718 command_runner.go:130] >       "repoTags": [
	I0116 02:23:19.137228  991718 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0116 02:23:19.137232  991718 command_runner.go:130] >       ],
	I0116 02:23:19.137239  991718 command_runner.go:130] >       "repoDigests": [
	I0116 02:23:19.137246  991718 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0116 02:23:19.137255  991718 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0116 02:23:19.137259  991718 command_runner.go:130] >       ],
	I0116 02:23:19.137264  991718 command_runner.go:130] >       "size": "750414",
	I0116 02:23:19.137270  991718 command_runner.go:130] >       "uid": {
	I0116 02:23:19.137274  991718 command_runner.go:130] >         "value": "65535"
	I0116 02:23:19.137280  991718 command_runner.go:130] >       },
	I0116 02:23:19.137284  991718 command_runner.go:130] >       "username": "",
	I0116 02:23:19.137288  991718 command_runner.go:130] >       "spec": null,
	I0116 02:23:19.137293  991718 command_runner.go:130] >       "pinned": false
	I0116 02:23:19.137297  991718 command_runner.go:130] >     }
	I0116 02:23:19.137301  991718 command_runner.go:130] >   ]
	I0116 02:23:19.137307  991718 command_runner.go:130] > }
	I0116 02:23:19.137433  991718 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 02:23:19.137446  991718 cache_images.go:84] Images are preloaded, skipping loading
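The "all images are preloaded" conclusion above is presumably reached by parsing the JSON emitted by crictl and matching the repoTags against the expected Kubernetes v1.28.4 image set. The same list can be inspected by hand; a quick sketch, assuming jq is available on the node:

	sudo crictl images --output json | jq -r '.images[].repoTags[]'

For this run that would print the kindnetd, storage-provisioner, coredns v1.10.1, etcd 3.5.9-0, kube-apiserver/controller-manager/proxy/scheduler v1.28.4 and pause 3.9 tags shown in the log.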
	I0116 02:23:19.137513  991718 ssh_runner.go:195] Run: crio config
	I0116 02:23:19.193611  991718 command_runner.go:130] ! time="2024-01-16 02:23:19.184895661Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0116 02:23:19.193645  991718 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0116 02:23:19.201523  991718 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 02:23:19.201557  991718 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 02:23:19.201567  991718 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 02:23:19.201573  991718 command_runner.go:130] > #
	I0116 02:23:19.201582  991718 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 02:23:19.201590  991718 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 02:23:19.201598  991718 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 02:23:19.201612  991718 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 02:23:19.201618  991718 command_runner.go:130] > # reload'.
	I0116 02:23:19.201629  991718 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 02:23:19.201644  991718 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 02:23:19.201660  991718 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 02:23:19.201678  991718 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 02:23:19.201683  991718 command_runner.go:130] > [crio]
	I0116 02:23:19.201694  991718 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 02:23:19.201704  991718 command_runner.go:130] > # containers images, in this directory.
	I0116 02:23:19.201716  991718 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0116 02:23:19.201740  991718 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 02:23:19.201754  991718 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0116 02:23:19.201766  991718 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 02:23:19.201786  991718 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 02:23:19.201797  991718 command_runner.go:130] > storage_driver = "overlay"
	I0116 02:23:19.201818  991718 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 02:23:19.201831  991718 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 02:23:19.201840  991718 command_runner.go:130] > storage_option = [
	I0116 02:23:19.201850  991718 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0116 02:23:19.201856  991718 command_runner.go:130] > ]
	I0116 02:23:19.201866  991718 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 02:23:19.201881  991718 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 02:23:19.201898  991718 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 02:23:19.201912  991718 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 02:23:19.201927  991718 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 02:23:19.201937  991718 command_runner.go:130] > # always happen on a node reboot
	I0116 02:23:19.201947  991718 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 02:23:19.201960  991718 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 02:23:19.201973  991718 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 02:23:19.201996  991718 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 02:23:19.202009  991718 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 02:23:19.202025  991718 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 02:23:19.202042  991718 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 02:23:19.202052  991718 command_runner.go:130] > # internal_wipe = true
	I0116 02:23:19.202065  991718 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 02:23:19.202080  991718 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 02:23:19.202093  991718 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 02:23:19.202106  991718 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 02:23:19.202120  991718 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 02:23:19.202128  991718 command_runner.go:130] > [crio.api]
	I0116 02:23:19.202142  991718 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 02:23:19.202154  991718 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 02:23:19.202167  991718 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 02:23:19.202185  991718 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 02:23:19.202200  991718 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 02:23:19.202213  991718 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 02:23:19.202222  991718 command_runner.go:130] > # stream_port = "0"
	I0116 02:23:19.202232  991718 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 02:23:19.202247  991718 command_runner.go:130] > # stream_enable_tls = false
	I0116 02:23:19.202264  991718 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 02:23:19.202275  991718 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 02:23:19.202286  991718 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 02:23:19.202300  991718 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 02:23:19.202309  991718 command_runner.go:130] > # minutes.
	I0116 02:23:19.202317  991718 command_runner.go:130] > # stream_tls_cert = ""
	I0116 02:23:19.202331  991718 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 02:23:19.202345  991718 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 02:23:19.202356  991718 command_runner.go:130] > # stream_tls_key = ""
	I0116 02:23:19.202423  991718 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 02:23:19.202445  991718 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 02:23:19.202455  991718 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 02:23:19.202467  991718 command_runner.go:130] > # stream_tls_ca = ""
	I0116 02:23:19.202484  991718 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:23:19.202495  991718 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0116 02:23:19.202515  991718 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:23:19.202526  991718 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0116 02:23:19.202563  991718 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 02:23:19.202576  991718 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 02:23:19.202587  991718 command_runner.go:130] > [crio.runtime]
	I0116 02:23:19.202599  991718 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 02:23:19.202612  991718 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 02:23:19.202622  991718 command_runner.go:130] > # "nofile=1024:2048"
	I0116 02:23:19.202633  991718 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 02:23:19.202643  991718 command_runner.go:130] > # default_ulimits = [
	I0116 02:23:19.202652  991718 command_runner.go:130] > # ]
	I0116 02:23:19.202664  991718 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 02:23:19.202678  991718 command_runner.go:130] > # no_pivot = false
	I0116 02:23:19.202690  991718 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 02:23:19.202705  991718 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 02:23:19.202716  991718 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 02:23:19.202727  991718 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 02:23:19.202739  991718 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 02:23:19.202755  991718 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:23:19.202766  991718 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0116 02:23:19.202777  991718 command_runner.go:130] > # Cgroup setting for conmon
	I0116 02:23:19.202795  991718 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 02:23:19.202805  991718 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 02:23:19.202817  991718 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 02:23:19.202829  991718 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 02:23:19.202845  991718 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:23:19.202855  991718 command_runner.go:130] > conmon_env = [
	I0116 02:23:19.202869  991718 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0116 02:23:19.202878  991718 command_runner.go:130] > ]
	I0116 02:23:19.202891  991718 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 02:23:19.202909  991718 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 02:23:19.202923  991718 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 02:23:19.202933  991718 command_runner.go:130] > # default_env = [
	I0116 02:23:19.202943  991718 command_runner.go:130] > # ]
	I0116 02:23:19.202955  991718 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 02:23:19.202965  991718 command_runner.go:130] > # selinux = false
	I0116 02:23:19.202980  991718 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 02:23:19.202993  991718 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 02:23:19.203008  991718 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 02:23:19.203018  991718 command_runner.go:130] > # seccomp_profile = ""
	I0116 02:23:19.203029  991718 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 02:23:19.203042  991718 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 02:23:19.203057  991718 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 02:23:19.203068  991718 command_runner.go:130] > # which might increase security.
	I0116 02:23:19.203079  991718 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0116 02:23:19.203091  991718 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 02:23:19.203105  991718 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 02:23:19.203119  991718 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 02:23:19.203137  991718 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 02:23:19.203147  991718 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:23:19.203154  991718 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 02:23:19.203164  991718 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 02:23:19.203195  991718 command_runner.go:130] > # the cgroup blockio controller.
	I0116 02:23:19.203207  991718 command_runner.go:130] > # blockio_config_file = ""
	I0116 02:23:19.203222  991718 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 02:23:19.203232  991718 command_runner.go:130] > # irqbalance daemon.
	I0116 02:23:19.203245  991718 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 02:23:19.203260  991718 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 02:23:19.203272  991718 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:23:19.203280  991718 command_runner.go:130] > # rdt_config_file = ""
	I0116 02:23:19.203293  991718 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 02:23:19.203303  991718 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 02:23:19.203318  991718 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 02:23:19.203329  991718 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 02:23:19.203361  991718 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 02:23:19.203379  991718 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 02:23:19.203396  991718 command_runner.go:130] > # will be added.
	I0116 02:23:19.203407  991718 command_runner.go:130] > # default_capabilities = [
	I0116 02:23:19.203417  991718 command_runner.go:130] > # 	"CHOWN",
	I0116 02:23:19.203425  991718 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 02:23:19.203436  991718 command_runner.go:130] > # 	"FSETID",
	I0116 02:23:19.203446  991718 command_runner.go:130] > # 	"FOWNER",
	I0116 02:23:19.203455  991718 command_runner.go:130] > # 	"SETGID",
	I0116 02:23:19.203465  991718 command_runner.go:130] > # 	"SETUID",
	I0116 02:23:19.203474  991718 command_runner.go:130] > # 	"SETPCAP",
	I0116 02:23:19.203482  991718 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 02:23:19.203492  991718 command_runner.go:130] > # 	"KILL",
	I0116 02:23:19.203500  991718 command_runner.go:130] > # ]
	I0116 02:23:19.203515  991718 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 02:23:19.203529  991718 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:23:19.203539  991718 command_runner.go:130] > # default_sysctls = [
	I0116 02:23:19.203546  991718 command_runner.go:130] > # ]
	I0116 02:23:19.203558  991718 command_runner.go:130] > # List of devices on the host that a
	I0116 02:23:19.203572  991718 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 02:23:19.203587  991718 command_runner.go:130] > # allowed_devices = [
	I0116 02:23:19.203598  991718 command_runner.go:130] > # 	"/dev/fuse",
	I0116 02:23:19.203606  991718 command_runner.go:130] > # ]
	I0116 02:23:19.203616  991718 command_runner.go:130] > # List of additional devices. specified as
	I0116 02:23:19.203632  991718 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 02:23:19.203645  991718 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 02:23:19.203695  991718 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:23:19.203706  991718 command_runner.go:130] > # additional_devices = [
	I0116 02:23:19.203715  991718 command_runner.go:130] > # ]
	I0116 02:23:19.203725  991718 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 02:23:19.203734  991718 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 02:23:19.203741  991718 command_runner.go:130] > # 	"/etc/cdi",
	I0116 02:23:19.203749  991718 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 02:23:19.203758  991718 command_runner.go:130] > # ]
	I0116 02:23:19.203770  991718 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 02:23:19.203784  991718 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 02:23:19.203795  991718 command_runner.go:130] > # Defaults to false.
	I0116 02:23:19.203807  991718 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 02:23:19.203825  991718 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 02:23:19.203839  991718 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 02:23:19.203847  991718 command_runner.go:130] > # hooks_dir = [
	I0116 02:23:19.203859  991718 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 02:23:19.203867  991718 command_runner.go:130] > # ]
	I0116 02:23:19.203885  991718 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 02:23:19.203899  991718 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 02:23:19.203915  991718 command_runner.go:130] > # its default mounts from the following two files:
	I0116 02:23:19.203923  991718 command_runner.go:130] > #
	I0116 02:23:19.203934  991718 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 02:23:19.203948  991718 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 02:23:19.203962  991718 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 02:23:19.203971  991718 command_runner.go:130] > #
	I0116 02:23:19.203982  991718 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 02:23:19.203997  991718 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 02:23:19.204011  991718 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 02:23:19.204023  991718 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 02:23:19.204029  991718 command_runner.go:130] > #
	I0116 02:23:19.204043  991718 command_runner.go:130] > # default_mounts_file = ""
	I0116 02:23:19.204057  991718 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 02:23:19.204071  991718 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 02:23:19.204082  991718 command_runner.go:130] > pids_limit = 1024
	I0116 02:23:19.204096  991718 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0116 02:23:19.204110  991718 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 02:23:19.204124  991718 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 02:23:19.204141  991718 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 02:23:19.204156  991718 command_runner.go:130] > # log_size_max = -1
	I0116 02:23:19.204177  991718 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0116 02:23:19.204189  991718 command_runner.go:130] > # log_to_journald = false
	I0116 02:23:19.204200  991718 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 02:23:19.204213  991718 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 02:23:19.204224  991718 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 02:23:19.204234  991718 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 02:23:19.204246  991718 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 02:23:19.204258  991718 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 02:23:19.204271  991718 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 02:23:19.204286  991718 command_runner.go:130] > # read_only = false
	I0116 02:23:19.204300  991718 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 02:23:19.204314  991718 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 02:23:19.204324  991718 command_runner.go:130] > # live configuration reload.
	I0116 02:23:19.204333  991718 command_runner.go:130] > # log_level = "info"
	I0116 02:23:19.204346  991718 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 02:23:19.204359  991718 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:23:19.204369  991718 command_runner.go:130] > # log_filter = ""
	I0116 02:23:19.204384  991718 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 02:23:19.204397  991718 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 02:23:19.204408  991718 command_runner.go:130] > # separated by comma.
	I0116 02:23:19.204426  991718 command_runner.go:130] > # uid_mappings = ""
	I0116 02:23:19.204441  991718 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 02:23:19.204452  991718 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 02:23:19.204458  991718 command_runner.go:130] > # separated by comma.
	I0116 02:23:19.204469  991718 command_runner.go:130] > # gid_mappings = ""
	I0116 02:23:19.204483  991718 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 02:23:19.204498  991718 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:23:19.204515  991718 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:23:19.204526  991718 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 02:23:19.204538  991718 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 02:23:19.204552  991718 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:23:19.204566  991718 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:23:19.204577  991718 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 02:23:19.204588  991718 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 02:23:19.204602  991718 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 02:23:19.204612  991718 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 02:23:19.204623  991718 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 02:23:19.204634  991718 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 02:23:19.204648  991718 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 02:23:19.204665  991718 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 02:23:19.204676  991718 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 02:23:19.204685  991718 command_runner.go:130] > drop_infra_ctr = false
	I0116 02:23:19.204700  991718 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 02:23:19.204713  991718 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 02:23:19.204729  991718 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 02:23:19.204744  991718 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 02:23:19.204758  991718 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 02:23:19.204770  991718 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 02:23:19.204782  991718 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 02:23:19.204796  991718 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 02:23:19.204808  991718 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0116 02:23:19.204820  991718 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 02:23:19.204834  991718 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 02:23:19.204848  991718 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 02:23:19.204859  991718 command_runner.go:130] > # default_runtime = "runc"
	I0116 02:23:19.204871  991718 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 02:23:19.204885  991718 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0116 02:23:19.204903  991718 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 02:23:19.204916  991718 command_runner.go:130] > # creation as a file is not desired either.
	I0116 02:23:19.204933  991718 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 02:23:19.204945  991718 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 02:23:19.204957  991718 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 02:23:19.204966  991718 command_runner.go:130] > # ]
	I0116 02:23:19.204981  991718 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 02:23:19.204996  991718 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 02:23:19.205011  991718 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 02:23:19.205025  991718 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 02:23:19.205035  991718 command_runner.go:130] > #
	I0116 02:23:19.205046  991718 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 02:23:19.205057  991718 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 02:23:19.205067  991718 command_runner.go:130] > #  runtime_type = "oci"
	I0116 02:23:19.205077  991718 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 02:23:19.205089  991718 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 02:23:19.205100  991718 command_runner.go:130] > #  allowed_annotations = []
	I0116 02:23:19.205110  991718 command_runner.go:130] > # Where:
	I0116 02:23:19.205123  991718 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 02:23:19.205137  991718 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 02:23:19.205151  991718 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 02:23:19.205165  991718 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 02:23:19.205181  991718 command_runner.go:130] > #   in $PATH.
	I0116 02:23:19.205193  991718 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 02:23:19.205210  991718 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 02:23:19.205226  991718 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 02:23:19.205236  991718 command_runner.go:130] > #   state.
	I0116 02:23:19.205251  991718 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 02:23:19.205265  991718 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0116 02:23:19.205279  991718 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 02:23:19.205293  991718 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 02:23:19.205307  991718 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 02:23:19.205322  991718 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 02:23:19.205334  991718 command_runner.go:130] > #   The currently recognized values are:
	I0116 02:23:19.205349  991718 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 02:23:19.205365  991718 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 02:23:19.205378  991718 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 02:23:19.205392  991718 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 02:23:19.205409  991718 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 02:23:19.205423  991718 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 02:23:19.205437  991718 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 02:23:19.205451  991718 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 02:23:19.205468  991718 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 02:23:19.205479  991718 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 02:23:19.205485  991718 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0116 02:23:19.205492  991718 command_runner.go:130] > runtime_type = "oci"
	I0116 02:23:19.205503  991718 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 02:23:19.205510  991718 command_runner.go:130] > runtime_config_path = ""
	I0116 02:23:19.205518  991718 command_runner.go:130] > monitor_path = ""
	I0116 02:23:19.205528  991718 command_runner.go:130] > monitor_cgroup = ""
	I0116 02:23:19.205536  991718 command_runner.go:130] > monitor_exec_cgroup = ""
	I0116 02:23:19.205551  991718 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 02:23:19.205561  991718 command_runner.go:130] > # running containers
	I0116 02:23:19.205572  991718 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 02:23:19.205584  991718 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 02:23:19.205652  991718 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 02:23:19.205665  991718 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 02:23:19.205673  991718 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 02:23:19.205680  991718 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 02:23:19.205689  991718 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 02:23:19.205705  991718 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 02:23:19.205718  991718 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 02:23:19.205727  991718 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0116 02:23:19.205741  991718 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 02:23:19.205754  991718 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 02:23:19.205769  991718 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 02:23:19.205785  991718 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0116 02:23:19.205816  991718 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 02:23:19.205829  991718 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 02:23:19.205848  991718 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 02:23:19.205869  991718 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 02:23:19.205886  991718 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 02:23:19.205902  991718 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 02:23:19.205912  991718 command_runner.go:130] > # Example:
	I0116 02:23:19.205923  991718 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 02:23:19.205935  991718 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 02:23:19.205947  991718 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 02:23:19.205960  991718 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 02:23:19.205973  991718 command_runner.go:130] > # cpuset = 0
	I0116 02:23:19.205983  991718 command_runner.go:130] > # cpushares = "0-1"
	I0116 02:23:19.205990  991718 command_runner.go:130] > # Where:
	I0116 02:23:19.206002  991718 command_runner.go:130] > # The workload name is workload-type.
	I0116 02:23:19.206018  991718 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 02:23:19.206031  991718 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 02:23:19.206044  991718 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 02:23:19.206060  991718 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 02:23:19.206074  991718 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0116 02:23:19.206081  991718 command_runner.go:130] > # 
	I0116 02:23:19.206096  991718 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 02:23:19.206105  991718 command_runner.go:130] > #
	I0116 02:23:19.206116  991718 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 02:23:19.206135  991718 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 02:23:19.206149  991718 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 02:23:19.206163  991718 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 02:23:19.206183  991718 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 02:23:19.206193  991718 command_runner.go:130] > [crio.image]
	I0116 02:23:19.206211  991718 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 02:23:19.206222  991718 command_runner.go:130] > # default_transport = "docker://"
	I0116 02:23:19.206236  991718 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 02:23:19.206250  991718 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:23:19.206260  991718 command_runner.go:130] > # global_auth_file = ""
	I0116 02:23:19.206271  991718 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 02:23:19.206279  991718 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:23:19.206286  991718 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 02:23:19.206296  991718 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 02:23:19.206304  991718 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:23:19.206311  991718 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:23:19.206317  991718 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 02:23:19.206327  991718 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 02:23:19.206337  991718 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0116 02:23:19.206347  991718 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0116 02:23:19.206358  991718 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 02:23:19.206365  991718 command_runner.go:130] > # pause_command = "/pause"
	I0116 02:23:19.206375  991718 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 02:23:19.206390  991718 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 02:23:19.206402  991718 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 02:23:19.206412  991718 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 02:23:19.206422  991718 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 02:23:19.206431  991718 command_runner.go:130] > # signature_policy = ""
	I0116 02:23:19.206439  991718 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 02:23:19.206449  991718 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 02:23:19.206457  991718 command_runner.go:130] > # changing them here.
	I0116 02:23:19.206465  991718 command_runner.go:130] > # insecure_registries = [
	I0116 02:23:19.206471  991718 command_runner.go:130] > # ]
	I0116 02:23:19.206486  991718 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 02:23:19.206496  991718 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0116 02:23:19.206503  991718 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 02:23:19.206512  991718 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 02:23:19.206520  991718 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 02:23:19.206529  991718 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 02:23:19.206536  991718 command_runner.go:130] > # CNI plugins.
	I0116 02:23:19.206548  991718 command_runner.go:130] > [crio.network]
	I0116 02:23:19.206565  991718 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 02:23:19.206578  991718 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0116 02:23:19.206586  991718 command_runner.go:130] > # cni_default_network = ""
	I0116 02:23:19.206599  991718 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 02:23:19.206612  991718 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 02:23:19.206625  991718 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 02:23:19.206636  991718 command_runner.go:130] > # plugin_dirs = [
	I0116 02:23:19.206644  991718 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 02:23:19.206653  991718 command_runner.go:130] > # ]
	I0116 02:23:19.206664  991718 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 02:23:19.206673  991718 command_runner.go:130] > [crio.metrics]
	I0116 02:23:19.206682  991718 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 02:23:19.206692  991718 command_runner.go:130] > enable_metrics = true
	I0116 02:23:19.206704  991718 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 02:23:19.206716  991718 command_runner.go:130] > # Per default all metrics are enabled.
	I0116 02:23:19.206731  991718 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0116 02:23:19.206745  991718 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 02:23:19.206758  991718 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 02:23:19.206772  991718 command_runner.go:130] > # metrics_collectors = [
	I0116 02:23:19.206782  991718 command_runner.go:130] > # 	"operations",
	I0116 02:23:19.206794  991718 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 02:23:19.206806  991718 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 02:23:19.206817  991718 command_runner.go:130] > # 	"operations_errors",
	I0116 02:23:19.206828  991718 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 02:23:19.206836  991718 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 02:23:19.206847  991718 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 02:23:19.206855  991718 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 02:23:19.206863  991718 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 02:23:19.206874  991718 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 02:23:19.206882  991718 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 02:23:19.206893  991718 command_runner.go:130] > # 	"containers_oom_total",
	I0116 02:23:19.206902  991718 command_runner.go:130] > # 	"containers_oom",
	I0116 02:23:19.206914  991718 command_runner.go:130] > # 	"processes_defunct",
	I0116 02:23:19.206924  991718 command_runner.go:130] > # 	"operations_total",
	I0116 02:23:19.206934  991718 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 02:23:19.206944  991718 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 02:23:19.206957  991718 command_runner.go:130] > # 	"operations_errors_total",
	I0116 02:23:19.206969  991718 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 02:23:19.206978  991718 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 02:23:19.206990  991718 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 02:23:19.207001  991718 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 02:23:19.207010  991718 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 02:23:19.207021  991718 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 02:23:19.207030  991718 command_runner.go:130] > # ]
	I0116 02:23:19.207040  991718 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 02:23:19.207051  991718 command_runner.go:130] > # metrics_port = 9090
	I0116 02:23:19.207064  991718 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 02:23:19.207074  991718 command_runner.go:130] > # metrics_socket = ""
	I0116 02:23:19.207085  991718 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 02:23:19.207099  991718 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 02:23:19.207113  991718 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 02:23:19.207124  991718 command_runner.go:130] > # certificate on any modification event.
	I0116 02:23:19.207132  991718 command_runner.go:130] > # metrics_cert = ""
	I0116 02:23:19.207144  991718 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 02:23:19.207161  991718 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 02:23:19.207178  991718 command_runner.go:130] > # metrics_key = ""
	I0116 02:23:19.207191  991718 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 02:23:19.207202  991718 command_runner.go:130] > [crio.tracing]
	I0116 02:23:19.207215  991718 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 02:23:19.207223  991718 command_runner.go:130] > # enable_tracing = false
	I0116 02:23:19.207234  991718 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0116 02:23:19.207240  991718 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 02:23:19.207247  991718 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 02:23:19.207254  991718 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 02:23:19.207263  991718 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 02:23:19.207268  991718 command_runner.go:130] > [crio.stats]
	I0116 02:23:19.207285  991718 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 02:23:19.207298  991718 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 02:23:19.207308  991718 command_runner.go:130] > # stats_collection_period = 0
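The crio.conf dump above ends with the metrics, tracing and stats sections; enable_metrics is set to true while metrics_port keeps its default of 9090. A minimal Go sketch of probing that Prometheus endpoint from inside the VM (the 127.0.0.1:9090 address and the 5s timeout are assumptions for illustration, not values taken from the minikube code):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// Probe CRI-O's Prometheus endpoint. 9090 is the default metrics_port from
	// the config dump above; the localhost address is an assumption for a test VM.
	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Println("metrics endpoint not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("got %d bytes of metrics, status %s\n", len(body), resp.Status)
	}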
	I0116 02:23:19.207432  991718 cni.go:84] Creating CNI manager for ""
	I0116 02:23:19.207446  991718 cni.go:136] 1 nodes found, recommending kindnet
	I0116 02:23:19.207481  991718 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:23:19.207518  991718 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.50 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-835787 NodeName:multinode-835787 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 02:23:19.207692  991718 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-835787"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
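The kubeadm config above is rendered from the option struct printed at kubeadm.go:176. A rough, hypothetical illustration of that templating step with text/template (the template string, the opts struct and its field names are simplified stand-ins, not minikube's actual template; the values are the ones shown in the log):

	package main

	import (
		"os"
		"text/template"
	)

	// Trimmed-down stand-in for a kubeadm InitConfiguration template.
	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	type opts struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}

	func main() {
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		_ = t.Execute(os.Stdout, opts{
			AdvertiseAddress: "192.168.39.50",
			APIServerPort:    8443,
			CRISocket:        "unix:///var/run/crio/crio.sock",
			NodeName:         "multinode-835787",
		})
	}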
	
	I0116 02:23:19.207784  991718 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-835787 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-835787 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 02:23:19.207858  991718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 02:23:19.218018  991718 command_runner.go:130] > kubeadm
	I0116 02:23:19.218056  991718 command_runner.go:130] > kubectl
	I0116 02:23:19.218063  991718 command_runner.go:130] > kubelet
	I0116 02:23:19.218097  991718 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 02:23:19.218189  991718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 02:23:19.228721  991718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0116 02:23:19.246950  991718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 02:23:19.264882  991718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0116 02:23:19.282771  991718 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I0116 02:23:19.286783  991718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
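The bash one-liner above keeps the hosts entry idempotent: any existing control-plane.minikube.internal line is filtered out before the fresh IP mapping is appended and the file is copied back into place. An equivalent sketch in Go, using the path, IP and hostname from the log (ensureHostsEntry is a hypothetical helper, not minikube code, and error handling is trimmed):

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the grep/echo one-liner from the log: drop any
	// line already ending in the hostname, then append the desired mapping.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // stale entry, re-added below
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		_ = ensureHostsEntry("/etc/hosts", "192.168.39.50", "control-plane.minikube.internal")
	}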
	I0116 02:23:19.300361  991718 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787 for IP: 192.168.39.50
	I0116 02:23:19.300422  991718 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:23:19.300638  991718 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 02:23:19.300691  991718 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 02:23:19.300744  991718 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key
	I0116 02:23:19.300756  991718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt with IP's: []
	I0116 02:23:19.412346  991718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt ...
	I0116 02:23:19.412382  991718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt: {Name:mk473d436cfdc57d1c71951ff4291d1f16650b9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:23:19.412583  991718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key ...
	I0116 02:23:19.412604  991718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key: {Name:mke7806db8d8d9f5a94efa60ed32015ae15a1bbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:23:19.412703  991718 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.key.59dcb911
	I0116 02:23:19.412721  991718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.crt.59dcb911 with IP's: [192.168.39.50 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 02:23:19.815124  991718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.crt.59dcb911 ...
	I0116 02:23:19.815168  991718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.crt.59dcb911: {Name:mke49c19f576d7766da351006bae9347afc13363 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:23:19.815375  991718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.key.59dcb911 ...
	I0116 02:23:19.815394  991718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.key.59dcb911: {Name:mk3781dffbd629e981f06f53c62cbb061c8f14e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:23:19.815491  991718 certs.go:337] copying /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.crt.59dcb911 -> /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.crt
	I0116 02:23:19.815602  991718 certs.go:341] copying /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.key.59dcb911 -> /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.key
	I0116 02:23:19.815684  991718 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/proxy-client.key
	I0116 02:23:19.815708  991718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/proxy-client.crt with IP's: []
	I0116 02:23:20.089423  991718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/proxy-client.crt ...
	I0116 02:23:20.089489  991718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/proxy-client.crt: {Name:mk35fe7814616d9c60a2e2413c39cfde597ba8a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:23:20.089714  991718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/proxy-client.key ...
	I0116 02:23:20.089736  991718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/proxy-client.key: {Name:mk2f73101c9e7b5936aab53552fce1a68edf5d6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:23:20.089853  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0116 02:23:20.089885  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0116 02:23:20.089895  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0116 02:23:20.089905  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0116 02:23:20.089918  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 02:23:20.089933  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 02:23:20.089946  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 02:23:20.089958  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
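crypto.go signs the apiserver serving certificate for the node IP, the first service-range IP and loopback, i.e. the [192.168.39.50 10.96.0.1 127.0.0.1 10.0.0.1] SAN list above. A compact illustration of issuing a certificate with those IP SANs using Go's crypto/x509 (self-signed here for brevity and with error handling dropped; minikube actually signs with the minikubeCA key):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the log above.
			IPAddresses: []net.IP{
				net.ParseIP("192.168.39.50"),
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
			},
		}
		// Self-signed for illustration; real code would sign with the CA cert/key.
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}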
	I0116 02:23:20.090015  991718 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 02:23:20.090055  991718 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 02:23:20.090066  991718 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 02:23:20.090093  991718 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 02:23:20.090118  991718 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:23:20.090140  991718 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 02:23:20.090184  991718 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 02:23:20.090211  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem -> /usr/share/ca-certificates/978482.pem
	I0116 02:23:20.090226  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> /usr/share/ca-certificates/9784822.pem
	I0116 02:23:20.090236  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:23:20.090795  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 02:23:20.118981  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 02:23:20.144847  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 02:23:20.170429  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 02:23:20.196809  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:23:20.222640  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 02:23:20.248816  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:23:20.275607  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 02:23:20.301382  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 02:23:20.326841  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 02:23:20.352901  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:23:20.378646  991718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 02:23:20.397156  991718 ssh_runner.go:195] Run: openssl version
	I0116 02:23:20.402888  991718 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 02:23:20.403237  991718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 02:23:20.414299  991718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 02:23:20.419556  991718 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 02:23:20.419593  991718 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 02:23:20.419652  991718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 02:23:20.425363  991718 command_runner.go:130] > 3ec20f2e
	I0116 02:23:20.425591  991718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 02:23:20.435986  991718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:23:20.447786  991718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:23:20.454271  991718 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:23:20.454311  991718 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:23:20.454378  991718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:23:20.460650  991718 command_runner.go:130] > b5213941
	I0116 02:23:20.460774  991718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 02:23:20.471630  991718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 02:23:20.482308  991718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 02:23:20.487491  991718 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 02:23:20.487530  991718 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 02:23:20.487586  991718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 02:23:20.493408  991718 command_runner.go:130] > 51391683
	I0116 02:23:20.493627  991718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
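Each CA certificate is exposed under /etc/ssl/certs via its OpenSSL subject hash, which is exactly the "openssl x509 -hash -noout" plus "ln -fs <pem> <hash>.0" sequence in the commands above; that naming lets OpenSSL-based clients find the CA by hashed lookup. A sketch of the same two steps driven from Go (linkByHash is a hypothetical helper that shells out to openssl just as the log does; the target path is one of the files shown above):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash installs certPath under /etc/ssl/certs/<subject-hash>.0,
	// mirroring the openssl-hash-then-symlink commands in the log.
	func linkByHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // ignore error; mirrors ln -fs overwriting the link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println("link failed:", err)
		}
	}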
	I0116 02:23:20.504146  991718 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:23:20.508639  991718 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:23:20.508744  991718 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:23:20.508806  991718 kubeadm.go:404] StartCluster: {Name:multinode-835787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-835787 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:23:20.508887  991718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 02:23:20.508960  991718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 02:23:20.552178  991718 cri.go:89] found id: ""
	I0116 02:23:20.552270  991718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 02:23:20.561849  991718 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0116 02:23:20.561895  991718 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0116 02:23:20.561905  991718 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0116 02:23:20.562008  991718 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 02:23:20.571019  991718 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 02:23:20.580046  991718 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0116 02:23:20.580079  991718 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0116 02:23:20.580088  991718 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0116 02:23:20.580096  991718 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:23:20.580140  991718 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:23:20.580182  991718 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 02:23:20.700004  991718 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 02:23:20.700047  991718 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0116 02:23:20.700348  991718 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 02:23:20.700369  991718 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 02:23:20.964658  991718 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 02:23:20.964719  991718 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 02:23:20.964819  991718 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 02:23:20.964832  991718 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 02:23:20.964936  991718 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 02:23:20.964947  991718 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 02:23:21.223100  991718 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:23:21.226450  991718 out.go:204]   - Generating certificates and keys ...
	I0116 02:23:21.223257  991718 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:23:21.226593  991718 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 02:23:21.226622  991718 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0116 02:23:21.226681  991718 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 02:23:21.226693  991718 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0116 02:23:21.325981  991718 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 02:23:21.326028  991718 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 02:23:21.457119  991718 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 02:23:21.457173  991718 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0116 02:23:21.637480  991718 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 02:23:21.637520  991718 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0116 02:23:21.862585  991718 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 02:23:21.862620  991718 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0116 02:23:21.985187  991718 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 02:23:21.985244  991718 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0116 02:23:21.985421  991718 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-835787] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0116 02:23:21.985438  991718 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-835787] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0116 02:23:22.084132  991718 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 02:23:22.084200  991718 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0116 02:23:22.084348  991718 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-835787] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0116 02:23:22.084372  991718 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-835787] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0116 02:23:22.450362  991718 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 02:23:22.450399  991718 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 02:23:22.628292  991718 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 02:23:22.628338  991718 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 02:23:22.788490  991718 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 02:23:22.788533  991718 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0116 02:23:22.788735  991718 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:23:22.788758  991718 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:23:22.917162  991718 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:23:22.917211  991718 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:23:23.056833  991718 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:23:23.056880  991718 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:23:23.213152  991718 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:23:23.213220  991718 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:23:23.297132  991718 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:23:23.297185  991718 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:23:23.297915  991718 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 02:23:23.297938  991718 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 02:23:23.300998  991718 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:23:23.302901  991718 out.go:204]   - Booting up control plane ...
	I0116 02:23:23.301073  991718 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:23:23.303009  991718 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:23:23.303023  991718 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:23:23.303099  991718 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:23:23.303110  991718 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:23:23.303193  991718 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:23:23.303203  991718 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:23:23.323150  991718 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:23:23.323194  991718 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:23:23.323940  991718 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:23:23.323955  991718 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:23:23.324038  991718 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 02:23:23.324064  991718 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 02:23:23.446643  991718 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 02:23:23.446683  991718 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 02:23:31.449149  991718 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.004312 seconds
	I0116 02:23:31.449149  991718 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004312 seconds
	I0116 02:23:31.449294  991718 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 02:23:31.449306  991718 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 02:23:31.473991  991718 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 02:23:31.474045  991718 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 02:23:32.021605  991718 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 02:23:32.021638  991718 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0116 02:23:32.021811  991718 kubeadm.go:322] [mark-control-plane] Marking the node multinode-835787 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 02:23:32.021825  991718 command_runner.go:130] > [mark-control-plane] Marking the node multinode-835787 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 02:23:32.541431  991718 kubeadm.go:322] [bootstrap-token] Using token: cs1efw.1wxtwfl5emrgsvkn
	I0116 02:23:32.543225  991718 out.go:204]   - Configuring RBAC rules ...
	I0116 02:23:32.541584  991718 command_runner.go:130] > [bootstrap-token] Using token: cs1efw.1wxtwfl5emrgsvkn
	I0116 02:23:32.543365  991718 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 02:23:32.543384  991718 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 02:23:32.554405  991718 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 02:23:32.554439  991718 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 02:23:32.572834  991718 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 02:23:32.572864  991718 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 02:23:32.577321  991718 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 02:23:32.577370  991718 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 02:23:32.582373  991718 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 02:23:32.582431  991718 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 02:23:32.587490  991718 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 02:23:32.587523  991718 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 02:23:32.607474  991718 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 02:23:32.607516  991718 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 02:23:32.925032  991718 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 02:23:32.925066  991718 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0116 02:23:32.978574  991718 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 02:23:32.978631  991718 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0116 02:23:32.978672  991718 kubeadm.go:322] 
	I0116 02:23:32.978737  991718 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 02:23:32.978749  991718 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0116 02:23:32.978760  991718 kubeadm.go:322] 
	I0116 02:23:32.978833  991718 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 02:23:32.978853  991718 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0116 02:23:32.978878  991718 kubeadm.go:322] 
	I0116 02:23:32.978913  991718 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 02:23:32.978923  991718 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0116 02:23:32.978990  991718 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 02:23:32.979003  991718 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 02:23:32.979071  991718 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 02:23:32.979084  991718 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 02:23:32.979091  991718 kubeadm.go:322] 
	I0116 02:23:32.979159  991718 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 02:23:32.979174  991718 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0116 02:23:32.979179  991718 kubeadm.go:322] 
	I0116 02:23:32.979228  991718 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 02:23:32.979235  991718 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 02:23:32.979239  991718 kubeadm.go:322] 
	I0116 02:23:32.979278  991718 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 02:23:32.979286  991718 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0116 02:23:32.979347  991718 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 02:23:32.979356  991718 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 02:23:32.979405  991718 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 02:23:32.979413  991718 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 02:23:32.979417  991718 kubeadm.go:322] 
	I0116 02:23:32.979485  991718 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 02:23:32.979494  991718 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0116 02:23:32.979549  991718 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 02:23:32.979572  991718 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0116 02:23:32.979603  991718 kubeadm.go:322] 
	I0116 02:23:32.979723  991718 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cs1efw.1wxtwfl5emrgsvkn \
	I0116 02:23:32.979740  991718 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token cs1efw.1wxtwfl5emrgsvkn \
	I0116 02:23:32.979873  991718 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b \
	I0116 02:23:32.979885  991718 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b \
	I0116 02:23:32.979912  991718 kubeadm.go:322] 	--control-plane 
	I0116 02:23:32.979922  991718 command_runner.go:130] > 	--control-plane 
	I0116 02:23:32.979926  991718 kubeadm.go:322] 
	I0116 02:23:32.980048  991718 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 02:23:32.980058  991718 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0116 02:23:32.980062  991718 kubeadm.go:322] 
	I0116 02:23:32.980166  991718 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cs1efw.1wxtwfl5emrgsvkn \
	I0116 02:23:32.980177  991718 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token cs1efw.1wxtwfl5emrgsvkn \
	I0116 02:23:32.980275  991718 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b 
	I0116 02:23:32.980286  991718 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b 
	I0116 02:23:32.980428  991718 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:23:32.980441  991718 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:23:32.980462  991718 cni.go:84] Creating CNI manager for ""
	I0116 02:23:32.980472  991718 cni.go:136] 1 nodes found, recommending kindnet
	I0116 02:23:32.982544  991718 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 02:23:32.984037  991718 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 02:23:33.001584  991718 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 02:23:33.001610  991718 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 02:23:33.001620  991718 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 02:23:33.001627  991718 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:23:33.001633  991718 command_runner.go:130] > Access: 2024-01-16 02:23:00.841750044 +0000
	I0116 02:23:33.001638  991718 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 02:23:33.001643  991718 command_runner.go:130] > Change: 2024-01-16 02:22:59.017750044 +0000
	I0116 02:23:33.001647  991718 command_runner.go:130] >  Birth: -
	I0116 02:23:33.001702  991718 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 02:23:33.001713  991718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 02:23:33.051278  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 02:23:34.129606  991718 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0116 02:23:34.138002  991718 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0116 02:23:34.152099  991718 command_runner.go:130] > serviceaccount/kindnet created
	I0116 02:23:34.167456  991718 command_runner.go:130] > daemonset.apps/kindnet created
	I0116 02:23:34.170096  991718 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.118777911s)
	I0116 02:23:34.170147  991718 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 02:23:34.170255  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:34.170260  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=multinode-835787 minikube.k8s.io/updated_at=2024_01_16T02_23_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:34.203786  991718 command_runner.go:130] > -16
	I0116 02:23:34.203824  991718 ops.go:34] apiserver oom_adj: -16
	I0116 02:23:34.409219  991718 command_runner.go:130] > node/multinode-835787 labeled
	I0116 02:23:34.409291  991718 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0116 02:23:34.409401  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:34.507551  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:34.910389  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:35.012007  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:35.409532  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:35.522562  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:35.909560  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:36.001786  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:36.409458  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:36.498449  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:36.910234  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:36.997586  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:37.410343  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:37.500316  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:37.909466  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:37.999945  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:38.410508  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:38.504133  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:38.910338  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:39.001942  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:39.409511  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:39.515051  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:39.910280  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:39.997042  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:40.409749  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:40.502432  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:40.910120  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:41.006783  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:41.409619  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:41.519466  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:41.909836  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:41.996742  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:42.410236  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:42.518488  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:42.909537  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:43.001322  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:43.409836  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:43.514797  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:43.909494  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:43.996463  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:44.410347  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:44.503412  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:44.910000  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:45.002393  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:45.410113  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:45.500005  991718 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:23:45.910551  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:23:46.154392  991718 command_runner.go:130] > NAME      SECRETS   AGE
	I0116 02:23:46.154426  991718 command_runner.go:130] > default   0         1s
	I0116 02:23:46.155727  991718 kubeadm.go:1088] duration metric: took 11.985546271s to wait for elevateKubeSystemPrivileges.
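The repeated "kubectl get sa default" calls above are a plain poll: kubeadm creates the default service account asynchronously, so minikube retries roughly twice a second until it shows up (about 12 seconds here, per the duration metric). A generic sketch of that retry pattern (the waitFor helper, its interval and its timeout are illustrative choices, not minikube's actual constants):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// waitFor polls cmd until it exits 0 or the timeout elapses, mimicking the
	// service-account wait seen in the log.
	func waitFor(interval, timeout time.Duration, name string, args ...string) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command(name, args...).Run(); err == nil {
				return nil
			}
			time.Sleep(interval)
		}
		return errors.New("timed out waiting for " + name)
	}

	func main() {
		err := waitFor(500*time.Millisecond, 2*time.Minute,
			"kubectl", "get", "sa", "default", "-n", "default")
		fmt.Println("wait result:", err)
	}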
	I0116 02:23:46.155773  991718 kubeadm.go:406] StartCluster complete in 25.646962319s
	I0116 02:23:46.155818  991718 settings.go:142] acquiring lock: {Name:mk39c69fb5454efd5afab021c02c3bec1f1b4e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:23:46.155913  991718 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:23:46.156627  991718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:23:46.156966  991718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 02:23:46.157024  991718 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 02:23:46.157120  991718 addons.go:69] Setting storage-provisioner=true in profile "multinode-835787"
	I0116 02:23:46.157146  991718 addons.go:69] Setting default-storageclass=true in profile "multinode-835787"
	I0116 02:23:46.157202  991718 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-835787"
	I0116 02:23:46.157150  991718 addons.go:234] Setting addon storage-provisioner=true in "multinode-835787"
	I0116 02:23:46.157348  991718 host.go:66] Checking if "multinode-835787" exists ...
	I0116 02:23:46.157721  991718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:23:46.157741  991718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:23:46.157750  991718 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:23:46.157760  991718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:23:46.157813  991718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:23:46.158129  991718 config.go:182] Loaded profile config "multinode-835787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:23:46.158074  991718 kapi.go:59] client config for multinode-835787: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key", CAFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:23:46.158806  991718 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 02:23:46.159230  991718 round_trippers.go:463] GET https://192.168.39.50:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:23:46.159243  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:46.159251  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:46.159261  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:46.178493  991718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38635
	I0116 02:23:46.178514  991718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36339
	I0116 02:23:46.178973  991718 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:23:46.179008  991718 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:23:46.179558  991718 main.go:141] libmachine: Using API Version  1
	I0116 02:23:46.179584  991718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:23:46.179715  991718 main.go:141] libmachine: Using API Version  1
	I0116 02:23:46.179737  991718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:23:46.180115  991718 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:23:46.180124  991718 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:23:46.180320  991718 main.go:141] libmachine: (multinode-835787) Calling .GetState
	I0116 02:23:46.180647  991718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:23:46.180688  991718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:23:46.182862  991718 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:23:46.183066  991718 kapi.go:59] client config for multinode-835787: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key", CAFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:23:46.183327  991718 addons.go:234] Setting addon default-storageclass=true in "multinode-835787"
	I0116 02:23:46.183359  991718 host.go:66] Checking if "multinode-835787" exists ...
	I0116 02:23:46.183627  991718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:23:46.183674  991718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:23:46.195880  991718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38983
	I0116 02:23:46.196378  991718 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:23:46.196915  991718 main.go:141] libmachine: Using API Version  1
	I0116 02:23:46.196945  991718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:23:46.197312  991718 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:23:46.197543  991718 main.go:141] libmachine: (multinode-835787) Calling .GetState
	I0116 02:23:46.198931  991718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38097
	I0116 02:23:46.199389  991718 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:23:46.199410  991718 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:23:46.201875  991718 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:23:46.199922  991718 main.go:141] libmachine: Using API Version  1
	I0116 02:23:46.201931  991718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:23:46.203480  991718 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:23:46.203499  991718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 02:23:46.203523  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:23:46.203853  991718 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:23:46.204528  991718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:23:46.204567  991718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:23:46.207273  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:46.207775  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:46.207813  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:46.208090  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:23:46.208334  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:46.208540  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:23:46.208733  991718 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa Username:docker}
	I0116 02:23:46.215216  991718 round_trippers.go:574] Response Status: 200 OK in 55 milliseconds
	I0116 02:23:46.215242  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:46.215253  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:46.215262  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:46.215271  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:46.215280  991718 round_trippers.go:580]     Content-Length: 291
	I0116 02:23:46.215307  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:46 GMT
	I0116 02:23:46.215316  991718 round_trippers.go:580]     Audit-Id: 19d15861-9415-4f57-a6a8-a9d49e0313f0
	I0116 02:23:46.215324  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:46.220883  991718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I0116 02:23:46.221310  991718 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:23:46.221900  991718 main.go:141] libmachine: Using API Version  1
	I0116 02:23:46.221957  991718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:23:46.222374  991718 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:23:46.222564  991718 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c3d1d02d-1d3d-4837-b3ba-04423f0d8104","resourceVersion":"354","creationTimestamp":"2024-01-16T02:23:32Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 02:23:46.222594  991718 main.go:141] libmachine: (multinode-835787) Calling .GetState
	I0116 02:23:46.223238  991718 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c3d1d02d-1d3d-4837-b3ba-04423f0d8104","resourceVersion":"354","creationTimestamp":"2024-01-16T02:23:32Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 02:23:46.223327  991718 round_trippers.go:463] PUT https://192.168.39.50:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:23:46.223345  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:46.223356  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:46.223366  991718 round_trippers.go:473]     Content-Type: application/json
	I0116 02:23:46.223379  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:46.224280  991718 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:23:46.224552  991718 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 02:23:46.224575  991718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 02:23:46.224599  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:23:46.228060  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:46.228567  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:23:46.228597  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:23:46.228915  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:23:46.229137  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:23:46.229329  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:23:46.229517  991718 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa Username:docker}
	I0116 02:23:46.267010  991718 round_trippers.go:574] Response Status: 200 OK in 43 milliseconds
	I0116 02:23:46.267039  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:46.267049  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:46 GMT
	I0116 02:23:46.267057  991718 round_trippers.go:580]     Audit-Id: 85b07efa-b06b-4f32-a150-b76ddee54f20
	I0116 02:23:46.267065  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:46.267071  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:46.267078  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:46.267085  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:46.267092  991718 round_trippers.go:580]     Content-Length: 291
	I0116 02:23:46.267124  991718 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c3d1d02d-1d3d-4837-b3ba-04423f0d8104","resourceVersion":"390","creationTimestamp":"2024-01-16T02:23:32Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 02:23:46.374547  991718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:23:46.413371  991718 command_runner.go:130] > apiVersion: v1
	I0116 02:23:46.413411  991718 command_runner.go:130] > data:
	I0116 02:23:46.413425  991718 command_runner.go:130] >   Corefile: |
	I0116 02:23:46.413430  991718 command_runner.go:130] >     .:53 {
	I0116 02:23:46.413436  991718 command_runner.go:130] >         errors
	I0116 02:23:46.413442  991718 command_runner.go:130] >         health {
	I0116 02:23:46.413449  991718 command_runner.go:130] >            lameduck 5s
	I0116 02:23:46.413454  991718 command_runner.go:130] >         }
	I0116 02:23:46.413460  991718 command_runner.go:130] >         ready
	I0116 02:23:46.413469  991718 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0116 02:23:46.413474  991718 command_runner.go:130] >            pods insecure
	I0116 02:23:46.413481  991718 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0116 02:23:46.413489  991718 command_runner.go:130] >            ttl 30
	I0116 02:23:46.413495  991718 command_runner.go:130] >         }
	I0116 02:23:46.413503  991718 command_runner.go:130] >         prometheus :9153
	I0116 02:23:46.413521  991718 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0116 02:23:46.413530  991718 command_runner.go:130] >            max_concurrent 1000
	I0116 02:23:46.413537  991718 command_runner.go:130] >         }
	I0116 02:23:46.413545  991718 command_runner.go:130] >         cache 30
	I0116 02:23:46.413552  991718 command_runner.go:130] >         loop
	I0116 02:23:46.413579  991718 command_runner.go:130] >         reload
	I0116 02:23:46.413593  991718 command_runner.go:130] >         loadbalance
	I0116 02:23:46.413599  991718 command_runner.go:130] >     }
	I0116 02:23:46.413605  991718 command_runner.go:130] > kind: ConfigMap
	I0116 02:23:46.413613  991718 command_runner.go:130] > metadata:
	I0116 02:23:46.413625  991718 command_runner.go:130] >   creationTimestamp: "2024-01-16T02:23:32Z"
	I0116 02:23:46.413633  991718 command_runner.go:130] >   name: coredns
	I0116 02:23:46.413645  991718 command_runner.go:130] >   namespace: kube-system
	I0116 02:23:46.413656  991718 command_runner.go:130] >   resourceVersion: "266"
	I0116 02:23:46.413666  991718 command_runner.go:130] >   uid: 5d0b97bf-0e87-435e-a3ac-c1a3ea5ab870
	I0116 02:23:46.413829  991718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 02:23:46.551836  991718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 02:23:46.660382  991718 round_trippers.go:463] GET https://192.168.39.50:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:23:46.660414  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:46.660428  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:46.660438  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:46.690213  991718 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0116 02:23:46.690249  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:46.690260  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:46.690268  991718 round_trippers.go:580]     Content-Length: 291
	I0116 02:23:46.690276  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:46 GMT
	I0116 02:23:46.690283  991718 round_trippers.go:580]     Audit-Id: 2c3fce35-efdf-4590-9adc-cad7e767ba78
	I0116 02:23:46.690290  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:46.690298  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:46.690309  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:46.690342  991718 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c3d1d02d-1d3d-4837-b3ba-04423f0d8104","resourceVersion":"401","creationTimestamp":"2024-01-16T02:23:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 02:23:46.690490  991718 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-835787" context rescaled to 1 replicas
	I0116 02:23:46.690536  991718 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:23:46.692996  991718 out.go:177] * Verifying Kubernetes components...
	I0116 02:23:46.694990  991718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:23:47.471753  991718 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0116 02:23:47.471826  991718 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0116 02:23:47.471842  991718 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0116 02:23:47.471854  991718 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0116 02:23:47.471862  991718 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0116 02:23:47.471870  991718 command_runner.go:130] > pod/storage-provisioner created
	I0116 02:23:47.471905  991718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.097316251s)
	I0116 02:23:47.471959  991718 main.go:141] libmachine: Making call to close driver server
	I0116 02:23:47.471978  991718 main.go:141] libmachine: (multinode-835787) Calling .Close
	I0116 02:23:47.471980  991718 command_runner.go:130] > configmap/coredns replaced
	I0116 02:23:47.472023  991718 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.058172946s)
	I0116 02:23:47.472059  991718 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0116 02:23:47.472070  991718 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0116 02:23:47.472133  991718 main.go:141] libmachine: Making call to close driver server
	I0116 02:23:47.472148  991718 main.go:141] libmachine: (multinode-835787) Calling .Close
	I0116 02:23:47.472325  991718 main.go:141] libmachine: (multinode-835787) DBG | Closing plugin on server side
	I0116 02:23:47.472367  991718 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:23:47.472378  991718 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:23:47.472396  991718 main.go:141] libmachine: Making call to close driver server
	I0116 02:23:47.472407  991718 main.go:141] libmachine: (multinode-835787) Calling .Close
	I0116 02:23:47.472614  991718 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:23:47.472628  991718 main.go:141] libmachine: (multinode-835787) DBG | Closing plugin on server side
	I0116 02:23:47.472641  991718 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:23:47.472652  991718 main.go:141] libmachine: Making call to close driver server
	I0116 02:23:47.472665  991718 main.go:141] libmachine: (multinode-835787) Calling .Close
	I0116 02:23:47.472668  991718 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:23:47.472913  991718 main.go:141] libmachine: (multinode-835787) DBG | Closing plugin on server side
	I0116 02:23:47.472955  991718 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:23:47.472974  991718 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:23:47.472993  991718 kapi.go:59] client config for multinode-835787: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key", CAFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:23:47.473086  991718 round_trippers.go:463] GET https://192.168.39.50:8443/apis/storage.k8s.io/v1/storageclasses
	I0116 02:23:47.473098  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:47.473109  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:47.473121  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:47.473336  991718 node_ready.go:35] waiting up to 6m0s for node "multinode-835787" to be "Ready" ...
	I0116 02:23:47.473477  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:47.473491  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:47.473501  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:47.473507  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:47.474059  991718 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:23:47.474069  991718 main.go:141] libmachine: (multinode-835787) DBG | Closing plugin on server side
	I0116 02:23:47.474075  991718 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:23:47.486504  991718 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0116 02:23:47.486532  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:47.486540  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:47.486546  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:47.486551  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:47.486559  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:47.486567  991718 round_trippers.go:580]     Content-Length: 1273
	I0116 02:23:47.486577  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:47 GMT
	I0116 02:23:47.486586  991718 round_trippers.go:580]     Audit-Id: 3b49fa14-d682-4a39-86cf-b4f78dcf1f7e
	I0116 02:23:47.486751  991718 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0116 02:23:47.486780  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:47.486791  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:47.486801  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:47 GMT
	I0116 02:23:47.486814  991718 round_trippers.go:580]     Audit-Id: 4d01bfcc-0e60-4563-b565-ff59359699fd
	I0116 02:23:47.486824  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:47.486835  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:47.486848  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:47.487802  991718 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"standard","uid":"c43373fd-2366-49cf-9d2f-92eb4ea71b18","resourceVersion":"403","creationTimestamp":"2024-01-16T02:23:47Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T02:23:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0116 02:23:47.488235  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"348","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:23:47.488389  991718 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c43373fd-2366-49cf-9d2f-92eb4ea71b18","resourceVersion":"403","creationTimestamp":"2024-01-16T02:23:47Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T02:23:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0116 02:23:47.488469  991718 round_trippers.go:463] PUT https://192.168.39.50:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0116 02:23:47.488482  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:47.488494  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:47.488504  991718 round_trippers.go:473]     Content-Type: application/json
	I0116 02:23:47.488516  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:47.491771  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:23:47.491796  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:47.491806  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:47.491814  991718 round_trippers.go:580]     Content-Length: 1220
	I0116 02:23:47.491821  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:47 GMT
	I0116 02:23:47.491829  991718 round_trippers.go:580]     Audit-Id: 21862a16-1564-40fd-a09d-09fcfe3459c1
	I0116 02:23:47.491836  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:47.491845  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:47.491858  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:47.491899  991718 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c43373fd-2366-49cf-9d2f-92eb4ea71b18","resourceVersion":"403","creationTimestamp":"2024-01-16T02:23:47Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T02:23:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0116 02:23:47.492041  991718 main.go:141] libmachine: Making call to close driver server
	I0116 02:23:47.492059  991718 main.go:141] libmachine: (multinode-835787) Calling .Close
	I0116 02:23:47.492394  991718 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:23:47.492502  991718 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:23:47.492455  991718 main.go:141] libmachine: (multinode-835787) DBG | Closing plugin on server side
	I0116 02:23:47.494906  991718 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0116 02:23:47.496438  991718 addons.go:505] enable addons completed in 1.33941542s: enabled=[storage-provisioner default-storageclass]
	I0116 02:23:47.974021  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:47.974053  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:47.974062  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:47.974069  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:47.978144  991718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:23:47.978180  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:47.978193  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:47.978202  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:47.978211  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:47.978224  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:47 GMT
	I0116 02:23:47.978239  991718 round_trippers.go:580]     Audit-Id: 5741ad90-cd2a-41f9-b340-bd6b7d18185d
	I0116 02:23:47.978249  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:47.978393  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"348","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:23:48.474007  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:48.474040  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:48.474049  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:48.474055  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:48.477283  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:23:48.477317  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:48.477329  991718 round_trippers.go:580]     Audit-Id: 237fbf91-238c-4dd5-86f6-3b3406838cbe
	I0116 02:23:48.477337  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:48.477343  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:48.477348  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:48.477353  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:48.477359  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:48 GMT
	I0116 02:23:48.477512  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"348","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:23:48.974262  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:48.974290  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:48.974306  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:48.974313  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:48.977327  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:23:48.977351  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:48.977359  991718 round_trippers.go:580]     Audit-Id: df396e5d-1992-4ed7-8b63-9d5895511691
	I0116 02:23:48.977365  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:48.977371  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:48.977376  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:48.977381  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:48.977386  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:48 GMT
	I0116 02:23:48.977622  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"348","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:23:49.474407  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:49.474445  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:49.474458  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:49.474475  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:49.477257  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:23:49.477280  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:49.477288  991718 round_trippers.go:580]     Audit-Id: 92e81921-5acd-4f8b-8f41-2ae3625d3c9e
	I0116 02:23:49.477294  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:49.477299  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:49.477304  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:49.477310  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:49.477317  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:49 GMT
	I0116 02:23:49.477480  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"348","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:23:49.477847  991718 node_ready.go:58] node "multinode-835787" has status "Ready":"False"
	I0116 02:23:49.973717  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:49.973749  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:49.973758  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:49.973766  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:49.976784  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:23:49.976809  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:49.976816  991718 round_trippers.go:580]     Audit-Id: ae0fce69-1124-45d8-9b12-441ccfd5fa4d
	I0116 02:23:49.976822  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:49.976827  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:49.976835  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:49.976841  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:49.976847  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:49 GMT
	I0116 02:23:49.977050  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"348","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:23:50.473689  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:50.473721  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:50.473732  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:50.473741  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:50.476764  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:23:50.476789  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:50.476797  991718 round_trippers.go:580]     Audit-Id: 00277fcb-a61a-4222-8bfa-4725ba1b3a8c
	I0116 02:23:50.476803  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:50.476810  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:50.476819  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:50.476828  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:50.476837  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:50 GMT
	I0116 02:23:50.477187  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"348","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:23:50.973834  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:50.973868  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:50.973878  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:50.973884  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:50.976910  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:23:50.976939  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:50.976949  991718 round_trippers.go:580]     Audit-Id: 61469efb-739d-4de8-8cd8-00f521c10e62
	I0116 02:23:50.976957  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:50.976964  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:50.976971  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:50.976978  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:50.976990  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:50 GMT
	I0116 02:23:50.977273  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"348","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:23:51.473605  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:51.473638  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:51.473647  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:51.473657  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:51.476688  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:23:51.476730  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:51.476743  991718 round_trippers.go:580]     Audit-Id: 00196bf3-a3a4-4e7f-abcd-d32c6423a6fa
	I0116 02:23:51.476753  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:51.476759  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:51.476764  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:51.476769  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:51.476774  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:51 GMT
	I0116 02:23:51.477135  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"348","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:23:51.973788  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:51.973845  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:51.973858  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:51.973869  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:51.977101  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:23:51.977130  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:51.977140  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:51.977167  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:51 GMT
	I0116 02:23:51.977174  991718 round_trippers.go:580]     Audit-Id: 021dab88-3fb4-44a9-90f8-ec309da93615
	I0116 02:23:51.977182  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:51.977189  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:51.977196  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:51.977465  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:23:51.977955  991718 node_ready.go:49] node "multinode-835787" has status "Ready":"True"
	I0116 02:23:51.977984  991718 node_ready.go:38] duration metric: took 4.504607599s waiting for node "multinode-835787" to be "Ready" ...
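The node_ready entries above poll GET /api/v1/nodes/multinode-835787 until the Node object's Ready condition reports True. A minimal sketch of that check, assuming client-go and a local kubeconfig (this is not minikube's actual node_ready helper; the node name is taken from the log):

// Hypothetical sketch: report whether a node's Ready condition is True,
// roughly the check the node_ready poll above is waiting on.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Assumes a kubeconfig at the default location pointing at the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := nodeIsReady(context.Background(), cs, "multinode-835787")
	fmt.Println(ready, err)
}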
	I0116 02:23:51.978000  991718 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:23:51.978105  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 02:23:51.978126  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:51.978139  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:51.978147  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:51.983004  991718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:23:51.983028  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:51.983038  991718 round_trippers.go:580]     Audit-Id: 9fe1b2c2-1061-43db-95a7-10acc28e57d7
	I0116 02:23:51.983046  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:51.983052  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:51.983066  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:51.983074  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:51.983083  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:51 GMT
	I0116 02:23:51.984121  991718 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"431","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54778 chars]
	I0116 02:23:51.987119  991718 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-965sn" in "kube-system" namespace to be "Ready" ...
	I0116 02:23:51.987232  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:23:51.987244  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:51.987255  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:51.987265  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:51.990023  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:23:51.990046  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:51.990053  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:51.990060  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:51 GMT
	I0116 02:23:51.990065  991718 round_trippers.go:580]     Audit-Id: c643f0b9-7d0e-4981-872a-8e14c4b75a72
	I0116 02:23:51.990070  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:51.990075  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:51.990082  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:51.990258  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"431","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0116 02:23:51.990700  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:51.990717  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:51.990727  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:51.990737  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:51.993358  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:23:51.993374  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:51.993381  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:51 GMT
	I0116 02:23:51.993387  991718 round_trippers.go:580]     Audit-Id: 9cc1dd2a-556b-4e1e-8485-f681371c22c4
	I0116 02:23:51.993395  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:51.993403  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:51.993412  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:51.993420  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:51.993870  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:23:52.487583  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:23:52.487637  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:52.487650  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:52.487660  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:52.491646  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:23:52.491673  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:52.491683  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:52.491691  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:52 GMT
	I0116 02:23:52.491698  991718 round_trippers.go:580]     Audit-Id: 30aa2166-b492-46ab-8959-cdd3a6f80b5f
	I0116 02:23:52.491705  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:52.491712  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:52.491718  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:52.491838  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"431","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0116 02:23:52.492315  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:52.492334  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:52.492344  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:52.492352  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:52.496350  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:23:52.496369  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:52.496375  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:52.496381  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:52.496386  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:52 GMT
	I0116 02:23:52.496391  991718 round_trippers.go:580]     Audit-Id: 65742c5b-544d-47fc-b7bd-359cb142b4d1
	I0116 02:23:52.496399  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:52.496406  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:52.496729  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:23:52.987365  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:23:52.987390  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:52.987399  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:52.987405  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:52.991186  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:23:52.991220  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:52.991230  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:52 GMT
	I0116 02:23:52.991238  991718 round_trippers.go:580]     Audit-Id: e36883b2-e4d9-4b8d-9fd2-e823a4ebad9b
	I0116 02:23:52.991246  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:52.991253  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:52.991264  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:52.991271  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:52.991566  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"431","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0116 02:23:52.992165  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:52.992185  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:52.992198  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:52.992214  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:53.002535  991718 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0116 02:23:53.002571  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:53.002582  991718 round_trippers.go:580]     Audit-Id: 32672464-0a93-43c5-9cc1-ad710c23ac02
	I0116 02:23:53.002589  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:53.002596  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:53.002604  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:53.002612  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:53.002620  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:52 GMT
	I0116 02:23:53.002785  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:23:53.488342  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:23:53.488380  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:53.488394  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:53.488403  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:53.491867  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:23:53.491900  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:53.491912  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:53.491921  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:53.491927  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:53 GMT
	I0116 02:23:53.491932  991718 round_trippers.go:580]     Audit-Id: 57320704-a615-4bfc-afbc-3066ccea6843
	I0116 02:23:53.491937  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:53.491942  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:53.492133  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"448","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6493 chars]
	I0116 02:23:53.492630  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:53.492658  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:53.492670  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:53.492678  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:53.495440  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:23:53.495465  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:53.495475  991718 round_trippers.go:580]     Audit-Id: 9009de2c-eb1b-449b-9b16-1e7f6335f991
	I0116 02:23:53.495483  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:53.495492  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:53.495500  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:53.495508  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:53.495515  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:53 GMT
	I0116 02:23:53.495671  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:23:53.987408  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:23:53.987439  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:53.987450  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:53.987458  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:53.990243  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:23:53.990267  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:53.990275  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:53.990283  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:53.990293  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:53.990301  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:53.990311  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:53 GMT
	I0116 02:23:53.990325  991718 round_trippers.go:580]     Audit-Id: 911ff784-57d6-4889-bb9d-55779056ab20
	I0116 02:23:53.990560  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"448","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6493 chars]
	I0116 02:23:53.991166  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:53.991281  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:53.991311  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:53.991326  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:53.993680  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:23:53.993704  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:53.993715  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:53.993730  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:53.993739  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:53.993748  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:53.993757  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:53 GMT
	I0116 02:23:53.993766  991718 round_trippers.go:580]     Audit-Id: d2cec217-9041-4ee6-b343-91101832e5a5
	I0116 02:23:53.994070  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:23:53.994497  991718 pod_ready.go:102] pod "coredns-5dd5756b68-965sn" in "kube-system" namespace has status "Ready":"False"
	I0116 02:23:54.487502  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:23:54.487533  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:54.487541  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:54.487547  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:54.490848  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:23:54.490878  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:54.490886  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:54.490891  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:54.490896  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:54 GMT
	I0116 02:23:54.490901  991718 round_trippers.go:580]     Audit-Id: 58676a92-f9b2-44ac-808c-495a88bf42c8
	I0116 02:23:54.490906  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:54.490911  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:54.491499  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"452","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0116 02:23:54.492074  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:54.492095  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:54.492107  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:54.492117  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:54.494626  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:23:54.494646  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:54.494653  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:54.494658  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:54.494663  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:54.494668  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:54.494673  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:54 GMT
	I0116 02:23:54.494678  991718 round_trippers.go:580]     Audit-Id: 29b17ec3-41c4-45da-8078-be37d3a34a00
	I0116 02:23:54.495028  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:23:54.495342  991718 pod_ready.go:92] pod "coredns-5dd5756b68-965sn" in "kube-system" namespace has status "Ready":"True"
	I0116 02:23:54.495361  991718 pod_ready.go:81] duration metric: took 2.508213901s waiting for pod "coredns-5dd5756b68-965sn" in "kube-system" namespace to be "Ready" ...
	I0116 02:23:54.495370  991718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:23:54.495433  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-835787
	I0116 02:23:54.495442  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:54.495449  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:54.495455  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:54.498006  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:23:54.498028  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:54.498039  991718 round_trippers.go:580]     Audit-Id: 0b115fb8-ca28-4d4f-ac31-c1a5fad5bbad
	I0116 02:23:54.498048  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:54.498056  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:54.498062  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:54.498067  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:54.498072  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:54 GMT
	I0116 02:23:54.498196  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-835787","namespace":"kube-system","uid":"ccb51de1-d565-42b0-bd30-9b1acb1c725d","resourceVersion":"443","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.50:2379","kubernetes.io/config.hash":"108085f55363e386b9f9c083ac579444","kubernetes.io/config.mirror":"108085f55363e386b9f9c083ac579444","kubernetes.io/config.seen":"2024-01-16T02:23:33.032941198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0116 02:23:54.498665  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:54.498684  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:54.498695  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:54.498701  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:54.501107  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:23:54.501121  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:54.501127  991718 round_trippers.go:580]     Audit-Id: 25714662-b624-4714-9b4b-01a4efdbbb5e
	I0116 02:23:54.501132  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:54.501137  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:54.501144  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:54.501151  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:54.501159  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:54 GMT
	I0116 02:23:54.501307  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:23:54.501573  991718 pod_ready.go:92] pod "etcd-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:23:54.501587  991718 pod_ready.go:81] duration metric: took 6.211984ms waiting for pod "etcd-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:23:54.501598  991718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:23:54.501658  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-835787
	I0116 02:23:54.501665  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:54.501672  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:54.501680  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:54.504059  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:23:54.504075  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:54.504081  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:54.504086  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:54.504091  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:54 GMT
	I0116 02:23:54.504096  991718 round_trippers.go:580]     Audit-Id: 0b16768e-021a-411a-b278-e836632282a8
	I0116 02:23:54.504103  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:54.504112  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:54.504654  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-835787","namespace":"kube-system","uid":"9c26db11-7208-4540-8a73-407a6edd3a0b","resourceVersion":"444","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.50:8443","kubernetes.io/config.hash":"b27880b6b81ca11dc023b4901941ff6f","kubernetes.io/config.mirror":"b27880b6b81ca11dc023b4901941ff6f","kubernetes.io/config.seen":"2024-01-16T02:23:33.032945135Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0116 02:23:54.505099  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:54.505115  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:54.505123  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:54.505133  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:54.507048  991718 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:23:54.507065  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:54.507071  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:54.507076  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:54 GMT
	I0116 02:23:54.507081  991718 round_trippers.go:580]     Audit-Id: d42810dd-73c5-402b-b333-7a8e8082fbc5
	I0116 02:23:54.507086  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:54.507091  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:54.507097  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:54.507226  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:23:54.507481  991718 pod_ready.go:92] pod "kube-apiserver-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:23:54.507495  991718 pod_ready.go:81] duration metric: took 5.886299ms waiting for pod "kube-apiserver-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:23:54.507504  991718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:23:54.507549  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-835787
	I0116 02:23:54.507557  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:54.507564  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:54.507570  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:54.509372  991718 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:23:54.509390  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:54.509400  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:54.509407  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:54.509415  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:54.509424  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:54.509431  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:54 GMT
	I0116 02:23:54.509437  991718 round_trippers.go:580]     Audit-Id: 1b046b48-7edd-4c6b-961e-e438011ee619
	I0116 02:23:54.509610  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-835787","namespace":"kube-system","uid":"daf9e312-54ad-4a4e-b334-9b84e55f8fef","resourceVersion":"445","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6adb137abb6e7ac4dcf8e50e41a3773b","kubernetes.io/config.mirror":"6adb137abb6e7ac4dcf8e50e41a3773b","kubernetes.io/config.seen":"2024-01-16T02:23:33.032946146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0116 02:23:54.509984  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:54.509997  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:54.510003  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:54.510009  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:54.511856  991718 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:23:54.511872  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:54.511880  991718 round_trippers.go:580]     Audit-Id: 3b7f8f81-5147-4744-87be-a868715e746f
	I0116 02:23:54.511888  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:54.511895  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:54.511904  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:54.511912  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:54.511918  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:54 GMT
	I0116 02:23:54.512103  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:23:54.512356  991718 pod_ready.go:92] pod "kube-controller-manager-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:23:54.512371  991718 pod_ready.go:81] duration metric: took 4.859129ms waiting for pod "kube-controller-manager-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:23:54.512379  991718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gbvc2" in "kube-system" namespace to be "Ready" ...
	I0116 02:23:54.512419  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbvc2
	I0116 02:23:54.512427  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:54.512433  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:54.512440  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:54.514401  991718 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:23:54.514419  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:54.514427  991718 round_trippers.go:580]     Audit-Id: c3135134-5e81-415c-be9a-d6a0ebab2b66
	I0116 02:23:54.514435  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:54.514441  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:54.514449  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:54.514457  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:54.514465  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:54 GMT
	I0116 02:23:54.514664  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gbvc2","generateName":"kube-proxy-","namespace":"kube-system","uid":"74d63696-cb46-484d-937b-8883e6f1df06","resourceVersion":"416","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0116 02:23:54.515157  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:54.515178  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:54.515188  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:54.515198  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:54.517069  991718 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:23:54.517084  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:54.517089  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:54.517095  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:54.517100  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:54.517105  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:54 GMT
	I0116 02:23:54.517109  991718 round_trippers.go:580]     Audit-Id: bc7cc85b-cddf-46f4-9cdc-f114628fd82b
	I0116 02:23:54.517114  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:54.517368  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:23:54.517760  991718 pod_ready.go:92] pod "kube-proxy-gbvc2" in "kube-system" namespace has status "Ready":"True"
	I0116 02:23:54.517781  991718 pod_ready.go:81] duration metric: took 5.395266ms waiting for pod "kube-proxy-gbvc2" in "kube-system" namespace to be "Ready" ...
	I0116 02:23:54.517794  991718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:23:54.688271  991718 request.go:629] Waited for 170.394505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-835787
	I0116 02:23:54.688362  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-835787
	I0116 02:23:54.688368  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:54.688376  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:54.688383  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:54.691828  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:23:54.691859  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:54.691871  991718 round_trippers.go:580]     Audit-Id: c8a7df03-a933-4e1f-8491-433b77b51ad3
	I0116 02:23:54.691880  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:54.691889  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:54.691895  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:54.691900  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:54.691905  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:54 GMT
	I0116 02:23:54.692063  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-835787","namespace":"kube-system","uid":"7b9c28cc-6e78-413a-af72-511714d2462e","resourceVersion":"442","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"230f2dad53142209ac2ae48ed27aa7b4","kubernetes.io/config.mirror":"230f2dad53142209ac2ae48ed27aa7b4","kubernetes.io/config.seen":"2024-01-16T02:23:33.032947019Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0116 02:23:54.887939  991718 request.go:629] Waited for 195.384104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:54.888020  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:23:54.888025  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:54.888033  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:54.888039  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:54.891004  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:23:54.891028  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:54.891036  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:54.891041  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:54.891047  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:54 GMT
	I0116 02:23:54.891052  991718 round_trippers.go:580]     Audit-Id: 4d1f1265-3efc-4806-abdc-725e18d0bc9c
	I0116 02:23:54.891057  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:54.891062  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:54.891249  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:23:54.891574  991718 pod_ready.go:92] pod "kube-scheduler-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:23:54.891592  991718 pod_ready.go:81] duration metric: took 373.791487ms waiting for pod "kube-scheduler-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:23:54.891604  991718 pod_ready.go:38] duration metric: took 2.913585403s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:23:54.891620  991718 api_server.go:52] waiting for apiserver process to appear ...
	I0116 02:23:54.891677  991718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:23:54.905486  991718 command_runner.go:130] > 1131
	I0116 02:23:54.905553  991718 api_server.go:72] duration metric: took 8.214979456s to wait for apiserver process to appear ...
	I0116 02:23:54.905567  991718 api_server.go:88] waiting for apiserver healthz status ...
	I0116 02:23:54.905597  991718 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 02:23:54.912305  991718 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I0116 02:23:54.912392  991718 round_trippers.go:463] GET https://192.168.39.50:8443/version
	I0116 02:23:54.912401  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:54.912412  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:54.912422  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:54.913555  991718 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:23:54.913575  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:54.913585  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:54.913592  991718 round_trippers.go:580]     Content-Length: 264
	I0116 02:23:54.913600  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:54 GMT
	I0116 02:23:54.913611  991718 round_trippers.go:580]     Audit-Id: d17dd99f-3f88-4fd8-a1b3-5b916008c314
	I0116 02:23:54.913619  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:54.913628  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:54.913633  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:54.913671  991718 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0116 02:23:54.913773  991718 api_server.go:141] control plane version: v1.28.4
	I0116 02:23:54.913792  991718 api_server.go:131] duration metric: took 8.218592ms to wait for apiserver health ...
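The healthz and /version checks above are plain HTTPS GETs against the API server at 192.168.39.50:8443. As a rough sketch of that probe loop in Go (assuming an *http.Client that already carries the cluster's CA and client certificates; this is illustrative, not minikube's actual api_server.go code):

package apiprobe

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls <base>/healthz until it answers 200 "ok" or the
// timeout expires. The TLS-configured client is assumed to come from the
// profile's kubeconfig; sketch only.
func waitForHealthz(client *http.Client, base string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(base + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s/healthz did not report ok within %s", base, timeout)
}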
	I0116 02:23:54.913815  991718 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 02:23:55.088282  991718 request.go:629] Waited for 174.374082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 02:23:55.088362  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 02:23:55.088371  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:55.088384  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:55.088391  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:55.092346  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:23:55.092373  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:55.092381  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:55.092387  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:55 GMT
	I0116 02:23:55.092392  991718 round_trippers.go:580]     Audit-Id: 86fe6697-ff24-42dc-8da7-e2debf208b2b
	I0116 02:23:55.092398  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:55.092403  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:55.092408  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:55.093318  991718 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"457"},"items":[{"metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"452","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I0116 02:23:55.095330  991718 system_pods.go:59] 8 kube-system pods found
	I0116 02:23:55.095368  991718 system_pods.go:61] "coredns-5dd5756b68-965sn" [a0898f09-1a64-4beb-bfbf-de15f2e07038] Running
	I0116 02:23:55.095378  991718 system_pods.go:61] "etcd-multinode-835787" [ccb51de1-d565-42b0-bd30-9b1acb1c725d] Running
	I0116 02:23:55.095382  991718 system_pods.go:61] "kindnet-755b9" [ee1ea8c4-abfe-4fea-9f71-32840f6790ed] Running
	I0116 02:23:55.095390  991718 system_pods.go:61] "kube-apiserver-multinode-835787" [9c26db11-7208-4540-8a73-407a6edd3a0b] Running
	I0116 02:23:55.095395  991718 system_pods.go:61] "kube-controller-manager-multinode-835787" [daf9e312-54ad-4a4e-b334-9b84e55f8fef] Running
	I0116 02:23:55.095402  991718 system_pods.go:61] "kube-proxy-gbvc2" [74d63696-cb46-484d-937b-8883e6f1df06] Running
	I0116 02:23:55.095406  991718 system_pods.go:61] "kube-scheduler-multinode-835787" [7b9c28cc-6e78-413a-af72-511714d2462e] Running
	I0116 02:23:55.095410  991718 system_pods.go:61] "storage-provisioner" [2d18fde8-ca44-4257-8475-100cd8b34ef8] Running
	I0116 02:23:55.095417  991718 system_pods.go:74] duration metric: took 181.593721ms to wait for pod list to return data ...
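The sweep above lists every kube-system pod and confirms it reports phase Running (the same check is repeated below for system_pods.go:116). With client-go the equivalent is roughly the following sketch, where the clientset is assumed to be built from the profile's kubeconfig:

package syspods

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// allSystemPodsRunning returns nil only if every pod in kube-system reports
// phase Running; otherwise it names the first pod that does not.
func allSystemPodsRunning(ctx context.Context, cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return fmt.Errorf("pod %q is %s, not Running", p.Name, p.Status.Phase)
		}
	}
	return nil
}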
	I0116 02:23:55.095425  991718 default_sa.go:34] waiting for default service account to be created ...
	I0116 02:23:55.287938  991718 request.go:629] Waited for 192.410629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/default/serviceaccounts
	I0116 02:23:55.288034  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/default/serviceaccounts
	I0116 02:23:55.288044  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:55.288061  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:55.288073  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:55.291838  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:23:55.291872  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:55.291887  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:55 GMT
	I0116 02:23:55.291893  991718 round_trippers.go:580]     Audit-Id: 2bc9525e-e9e7-4062-aaa9-7a8d1856a3d0
	I0116 02:23:55.291898  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:55.291904  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:55.291909  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:55.291914  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:55.291920  991718 round_trippers.go:580]     Content-Length: 261
	I0116 02:23:55.291946  991718 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"457"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"4cc2ba47-febe-498a-9316-a228f833a1cc","resourceVersion":"346","creationTimestamp":"2024-01-16T02:23:45Z"}}]}
	I0116 02:23:55.292287  991718 default_sa.go:45] found service account: "default"
	I0116 02:23:55.292331  991718 default_sa.go:55] duration metric: took 196.896365ms for default service account to be created ...
	I0116 02:23:55.292352  991718 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 02:23:55.487842  991718 request.go:629] Waited for 195.380856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 02:23:55.487910  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 02:23:55.487916  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:55.487924  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:55.487931  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:55.492438  991718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:23:55.492468  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:55.492479  991718 round_trippers.go:580]     Audit-Id: a1ba6d44-7be9-473a-b718-204487dbce67
	I0116 02:23:55.492489  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:55.492497  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:55.492504  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:55.492509  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:55.492515  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:55 GMT
	I0116 02:23:55.493555  991718 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"457"},"items":[{"metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"452","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I0116 02:23:55.495229  991718 system_pods.go:86] 8 kube-system pods found
	I0116 02:23:55.495253  991718 system_pods.go:89] "coredns-5dd5756b68-965sn" [a0898f09-1a64-4beb-bfbf-de15f2e07038] Running
	I0116 02:23:55.495258  991718 system_pods.go:89] "etcd-multinode-835787" [ccb51de1-d565-42b0-bd30-9b1acb1c725d] Running
	I0116 02:23:55.495262  991718 system_pods.go:89] "kindnet-755b9" [ee1ea8c4-abfe-4fea-9f71-32840f6790ed] Running
	I0116 02:23:55.495266  991718 system_pods.go:89] "kube-apiserver-multinode-835787" [9c26db11-7208-4540-8a73-407a6edd3a0b] Running
	I0116 02:23:55.495271  991718 system_pods.go:89] "kube-controller-manager-multinode-835787" [daf9e312-54ad-4a4e-b334-9b84e55f8fef] Running
	I0116 02:23:55.495276  991718 system_pods.go:89] "kube-proxy-gbvc2" [74d63696-cb46-484d-937b-8883e6f1df06] Running
	I0116 02:23:55.495281  991718 system_pods.go:89] "kube-scheduler-multinode-835787" [7b9c28cc-6e78-413a-af72-511714d2462e] Running
	I0116 02:23:55.495285  991718 system_pods.go:89] "storage-provisioner" [2d18fde8-ca44-4257-8475-100cd8b34ef8] Running
	I0116 02:23:55.495295  991718 system_pods.go:126] duration metric: took 202.935695ms to wait for k8s-apps to be running ...
	I0116 02:23:55.495302  991718 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:23:55.495352  991718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:23:55.510771  991718 system_svc.go:56] duration metric: took 15.457532ms WaitForService to wait for kubelet.
	I0116 02:23:55.510806  991718 kubeadm.go:581] duration metric: took 8.820231693s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:23:55.510828  991718 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:23:55.688283  991718 request.go:629] Waited for 177.347213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes
	I0116 02:23:55.688345  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes
	I0116 02:23:55.688351  991718 round_trippers.go:469] Request Headers:
	I0116 02:23:55.688358  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:23:55.688365  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:23:55.691514  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:23:55.691547  991718 round_trippers.go:577] Response Headers:
	I0116 02:23:55.691554  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:23:55.691561  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:23:55 GMT
	I0116 02:23:55.691566  991718 round_trippers.go:580]     Audit-Id: ace8c067-eef1-41f6-adce-70000a58ca98
	I0116 02:23:55.691571  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:23:55.691577  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:23:55.691583  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:23:55.691976  991718 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"457"},"items":[{"metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5951 chars]
	I0116 02:23:55.692391  991718 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:23:55.692419  991718 node_conditions.go:123] node cpu capacity is 2
	I0116 02:23:55.692428  991718 node_conditions.go:105] duration metric: took 181.595906ms to run NodePressure ...
	I0116 02:23:55.692440  991718 start.go:228] waiting for startup goroutines ...
	I0116 02:23:55.692446  991718 start.go:233] waiting for cluster config update ...
	I0116 02:23:55.692456  991718 start.go:242] writing updated cluster config ...
	I0116 02:23:55.695015  991718 out.go:177] 
	I0116 02:23:55.696814  991718 config.go:182] Loaded profile config "multinode-835787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:23:55.696894  991718 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/config.json ...
	I0116 02:23:55.699106  991718 out.go:177] * Starting worker node multinode-835787-m02 in cluster multinode-835787
	I0116 02:23:55.700665  991718 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:23:55.700703  991718 cache.go:56] Caching tarball of preloaded images
	I0116 02:23:55.700835  991718 preload.go:174] Found /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 02:23:55.700851  991718 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 02:23:55.700933  991718 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/config.json ...
	I0116 02:23:55.701116  991718 start.go:365] acquiring machines lock for multinode-835787-m02: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 02:23:55.701184  991718 start.go:369] acquired machines lock for "multinode-835787-m02" in 46.2µs
	I0116 02:23:55.701210  991718 start.go:93] Provisioning new machine with config: &{Name:multinode-835787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-835787 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:t
rue ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 02:23:55.701284  991718 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0116 02:23:55.702956  991718 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0116 02:23:55.703056  991718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:23:55.703086  991718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:23:55.719323  991718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42093
	I0116 02:23:55.719811  991718 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:23:55.720270  991718 main.go:141] libmachine: Using API Version  1
	I0116 02:23:55.720295  991718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:23:55.720635  991718 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:23:55.720833  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetMachineName
	I0116 02:23:55.720993  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .DriverName
	I0116 02:23:55.721187  991718 start.go:159] libmachine.API.Create for "multinode-835787" (driver="kvm2")
	I0116 02:23:55.721210  991718 client.go:168] LocalClient.Create starting
	I0116 02:23:55.721244  991718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem
	I0116 02:23:55.721286  991718 main.go:141] libmachine: Decoding PEM data...
	I0116 02:23:55.721302  991718 main.go:141] libmachine: Parsing certificate...
	I0116 02:23:55.721375  991718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem
	I0116 02:23:55.721404  991718 main.go:141] libmachine: Decoding PEM data...
	I0116 02:23:55.721422  991718 main.go:141] libmachine: Parsing certificate...
	I0116 02:23:55.721447  991718 main.go:141] libmachine: Running pre-create checks...
	I0116 02:23:55.721461  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .PreCreateCheck
	I0116 02:23:55.721649  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetConfigRaw
	I0116 02:23:55.722076  991718 main.go:141] libmachine: Creating machine...
	I0116 02:23:55.722093  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .Create
	I0116 02:23:55.722226  991718 main.go:141] libmachine: (multinode-835787-m02) Creating KVM machine...
	I0116 02:23:55.723525  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found existing default KVM network
	I0116 02:23:55.723667  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found existing private KVM network mk-multinode-835787
	I0116 02:23:55.723826  991718 main.go:141] libmachine: (multinode-835787-m02) Setting up store path in /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02 ...
	I0116 02:23:55.723846  991718 main.go:141] libmachine: (multinode-835787-m02) Building disk image from file:///home/jenkins/minikube-integration/17967-971255/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 02:23:55.723949  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:23:55.723812  992079 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:23:55.724059  991718 main.go:141] libmachine: (multinode-835787-m02) Downloading /home/jenkins/minikube-integration/17967-971255/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17967-971255/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 02:23:55.957585  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:23:55.957453  992079 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02/id_rsa...
	I0116 02:23:56.120910  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:23:56.120720  992079 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02/multinode-835787-m02.rawdisk...
	I0116 02:23:56.120955  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | Writing magic tar header
	I0116 02:23:56.121017  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | Writing SSH key tar header
	I0116 02:23:56.121044  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:23:56.120888  992079 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02 ...
	I0116 02:23:56.121087  991718 main.go:141] libmachine: (multinode-835787-m02) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02 (perms=drwx------)
	I0116 02:23:56.121113  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02
	I0116 02:23:56.121129  991718 main.go:141] libmachine: (multinode-835787-m02) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube/machines (perms=drwxr-xr-x)
	I0116 02:23:56.121160  991718 main.go:141] libmachine: (multinode-835787-m02) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube (perms=drwxr-xr-x)
	I0116 02:23:56.121189  991718 main.go:141] libmachine: (multinode-835787-m02) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255 (perms=drwxrwxr-x)
	I0116 02:23:56.121197  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube/machines
	I0116 02:23:56.121208  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:23:56.121215  991718 main.go:141] libmachine: (multinode-835787-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 02:23:56.121222  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255
	I0116 02:23:56.121238  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 02:23:56.121251  991718 main.go:141] libmachine: (multinode-835787-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 02:23:56.121257  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | Checking permissions on dir: /home/jenkins
	I0116 02:23:56.121265  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | Checking permissions on dir: /home
	I0116 02:23:56.121274  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | Skipping /home - not owner
	I0116 02:23:56.121284  991718 main.go:141] libmachine: (multinode-835787-m02) Creating domain...
	I0116 02:23:56.122756  991718 main.go:141] libmachine: (multinode-835787-m02) define libvirt domain using xml: 
	I0116 02:23:56.122787  991718 main.go:141] libmachine: (multinode-835787-m02) <domain type='kvm'>
	I0116 02:23:56.122800  991718 main.go:141] libmachine: (multinode-835787-m02)   <name>multinode-835787-m02</name>
	I0116 02:23:56.122814  991718 main.go:141] libmachine: (multinode-835787-m02)   <memory unit='MiB'>2200</memory>
	I0116 02:23:56.122834  991718 main.go:141] libmachine: (multinode-835787-m02)   <vcpu>2</vcpu>
	I0116 02:23:56.122843  991718 main.go:141] libmachine: (multinode-835787-m02)   <features>
	I0116 02:23:56.122856  991718 main.go:141] libmachine: (multinode-835787-m02)     <acpi/>
	I0116 02:23:56.122864  991718 main.go:141] libmachine: (multinode-835787-m02)     <apic/>
	I0116 02:23:56.122877  991718 main.go:141] libmachine: (multinode-835787-m02)     <pae/>
	I0116 02:23:56.122888  991718 main.go:141] libmachine: (multinode-835787-m02)     
	I0116 02:23:56.122901  991718 main.go:141] libmachine: (multinode-835787-m02)   </features>
	I0116 02:23:56.122911  991718 main.go:141] libmachine: (multinode-835787-m02)   <cpu mode='host-passthrough'>
	I0116 02:23:56.122921  991718 main.go:141] libmachine: (multinode-835787-m02)   
	I0116 02:23:56.122929  991718 main.go:141] libmachine: (multinode-835787-m02)   </cpu>
	I0116 02:23:56.122939  991718 main.go:141] libmachine: (multinode-835787-m02)   <os>
	I0116 02:23:56.122952  991718 main.go:141] libmachine: (multinode-835787-m02)     <type>hvm</type>
	I0116 02:23:56.122966  991718 main.go:141] libmachine: (multinode-835787-m02)     <boot dev='cdrom'/>
	I0116 02:23:56.122978  991718 main.go:141] libmachine: (multinode-835787-m02)     <boot dev='hd'/>
	I0116 02:23:56.122990  991718 main.go:141] libmachine: (multinode-835787-m02)     <bootmenu enable='no'/>
	I0116 02:23:56.123009  991718 main.go:141] libmachine: (multinode-835787-m02)   </os>
	I0116 02:23:56.123070  991718 main.go:141] libmachine: (multinode-835787-m02)   <devices>
	I0116 02:23:56.123106  991718 main.go:141] libmachine: (multinode-835787-m02)     <disk type='file' device='cdrom'>
	I0116 02:23:56.123124  991718 main.go:141] libmachine: (multinode-835787-m02)       <source file='/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02/boot2docker.iso'/>
	I0116 02:23:56.123138  991718 main.go:141] libmachine: (multinode-835787-m02)       <target dev='hdc' bus='scsi'/>
	I0116 02:23:56.123153  991718 main.go:141] libmachine: (multinode-835787-m02)       <readonly/>
	I0116 02:23:56.123165  991718 main.go:141] libmachine: (multinode-835787-m02)     </disk>
	I0116 02:23:56.123176  991718 main.go:141] libmachine: (multinode-835787-m02)     <disk type='file' device='disk'>
	I0116 02:23:56.123191  991718 main.go:141] libmachine: (multinode-835787-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 02:23:56.123210  991718 main.go:141] libmachine: (multinode-835787-m02)       <source file='/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02/multinode-835787-m02.rawdisk'/>
	I0116 02:23:56.123225  991718 main.go:141] libmachine: (multinode-835787-m02)       <target dev='hda' bus='virtio'/>
	I0116 02:23:56.123237  991718 main.go:141] libmachine: (multinode-835787-m02)     </disk>
	I0116 02:23:56.123251  991718 main.go:141] libmachine: (multinode-835787-m02)     <interface type='network'>
	I0116 02:23:56.123264  991718 main.go:141] libmachine: (multinode-835787-m02)       <source network='mk-multinode-835787'/>
	I0116 02:23:56.123278  991718 main.go:141] libmachine: (multinode-835787-m02)       <model type='virtio'/>
	I0116 02:23:56.123290  991718 main.go:141] libmachine: (multinode-835787-m02)     </interface>
	I0116 02:23:56.123304  991718 main.go:141] libmachine: (multinode-835787-m02)     <interface type='network'>
	I0116 02:23:56.123317  991718 main.go:141] libmachine: (multinode-835787-m02)       <source network='default'/>
	I0116 02:23:56.123330  991718 main.go:141] libmachine: (multinode-835787-m02)       <model type='virtio'/>
	I0116 02:23:56.123338  991718 main.go:141] libmachine: (multinode-835787-m02)     </interface>
	I0116 02:23:56.123345  991718 main.go:141] libmachine: (multinode-835787-m02)     <serial type='pty'>
	I0116 02:23:56.123356  991718 main.go:141] libmachine: (multinode-835787-m02)       <target port='0'/>
	I0116 02:23:56.123378  991718 main.go:141] libmachine: (multinode-835787-m02)     </serial>
	I0116 02:23:56.123399  991718 main.go:141] libmachine: (multinode-835787-m02)     <console type='pty'>
	I0116 02:23:56.123415  991718 main.go:141] libmachine: (multinode-835787-m02)       <target type='serial' port='0'/>
	I0116 02:23:56.123427  991718 main.go:141] libmachine: (multinode-835787-m02)     </console>
	I0116 02:23:56.123441  991718 main.go:141] libmachine: (multinode-835787-m02)     <rng model='virtio'>
	I0116 02:23:56.123455  991718 main.go:141] libmachine: (multinode-835787-m02)       <backend model='random'>/dev/random</backend>
	I0116 02:23:56.123468  991718 main.go:141] libmachine: (multinode-835787-m02)     </rng>
	I0116 02:23:56.123480  991718 main.go:141] libmachine: (multinode-835787-m02)     
	I0116 02:23:56.123492  991718 main.go:141] libmachine: (multinode-835787-m02)     
	I0116 02:23:56.123503  991718 main.go:141] libmachine: (multinode-835787-m02)   </devices>
	I0116 02:23:56.123515  991718 main.go:141] libmachine: (multinode-835787-m02) </domain>
	I0116 02:23:56.123532  991718 main.go:141] libmachine: (multinode-835787-m02) 
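The block of DBG lines just above is the libvirt domain XML the kvm2 driver defines for multinode-835787-m02: the boot2docker ISO as a SCSI cdrom, the raw disk on virtio, one NIC on the private mk-multinode-835787 network and one on libvirt's default network, a serial console, and a virtio RNG. Defining and booting such a domain with the Go libvirt bindings looks roughly like this sketch (the XML string is assumed to be assembled elsewhere; this is not the driver's actual code and needs the libvirt C headers to build):

package kvmsketch

import (
	libvirt "github.com/libvirt/libvirt-go"
)

// defineAndStart registers a persistent domain from the XML shown in the log
// and boots it; the driver then waits for DHCP to hand the guest an address,
// which is what the retry lines below are doing.
func defineAndStart(uri, domainXML string) error {
	conn, err := libvirt.NewConnect(uri) // e.g. "qemu:///system", as in KVMQemuURI
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // start the defined domain
}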
	I0116 02:23:56.130832  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:c4:93:1a in network default
	I0116 02:23:56.131307  991718 main.go:141] libmachine: (multinode-835787-m02) Ensuring networks are active...
	I0116 02:23:56.131342  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:23:56.132173  991718 main.go:141] libmachine: (multinode-835787-m02) Ensuring network default is active
	I0116 02:23:56.132565  991718 main.go:141] libmachine: (multinode-835787-m02) Ensuring network mk-multinode-835787 is active
	I0116 02:23:56.132899  991718 main.go:141] libmachine: (multinode-835787-m02) Getting domain xml...
	I0116 02:23:56.133636  991718 main.go:141] libmachine: (multinode-835787-m02) Creating domain...
	I0116 02:23:57.385203  991718 main.go:141] libmachine: (multinode-835787-m02) Waiting to get IP...
	I0116 02:23:57.385987  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:23:57.386420  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | unable to find current IP address of domain multinode-835787-m02 in network mk-multinode-835787
	I0116 02:23:57.386448  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:23:57.386398  992079 retry.go:31] will retry after 288.72283ms: waiting for machine to come up
	I0116 02:23:57.676976  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:23:57.677367  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | unable to find current IP address of domain multinode-835787-m02 in network mk-multinode-835787
	I0116 02:23:57.677396  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:23:57.677324  992079 retry.go:31] will retry after 296.000345ms: waiting for machine to come up
	I0116 02:23:57.974978  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:23:57.975422  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | unable to find current IP address of domain multinode-835787-m02 in network mk-multinode-835787
	I0116 02:23:57.975443  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:23:57.975391  992079 retry.go:31] will retry after 388.930984ms: waiting for machine to come up
	I0116 02:23:58.365769  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:23:58.366227  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | unable to find current IP address of domain multinode-835787-m02 in network mk-multinode-835787
	I0116 02:23:58.366258  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:23:58.366186  992079 retry.go:31] will retry after 535.168612ms: waiting for machine to come up
	I0116 02:23:58.903070  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:23:58.903592  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | unable to find current IP address of domain multinode-835787-m02 in network mk-multinode-835787
	I0116 02:23:58.903623  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:23:58.903530  992079 retry.go:31] will retry after 712.508801ms: waiting for machine to come up
	I0116 02:23:59.617408  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:23:59.617952  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | unable to find current IP address of domain multinode-835787-m02 in network mk-multinode-835787
	I0116 02:23:59.617981  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:23:59.617900  992079 retry.go:31] will retry after 697.728399ms: waiting for machine to come up
	I0116 02:24:00.317507  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:00.318078  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | unable to find current IP address of domain multinode-835787-m02 in network mk-multinode-835787
	I0116 02:24:00.318111  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:24:00.318029  992079 retry.go:31] will retry after 786.765353ms: waiting for machine to come up
	I0116 02:24:01.106587  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:01.106997  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | unable to find current IP address of domain multinode-835787-m02 in network mk-multinode-835787
	I0116 02:24:01.107025  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:24:01.106936  992079 retry.go:31] will retry after 1.050111236s: waiting for machine to come up
	I0116 02:24:02.158974  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:02.159377  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | unable to find current IP address of domain multinode-835787-m02 in network mk-multinode-835787
	I0116 02:24:02.159403  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:24:02.159327  992079 retry.go:31] will retry after 1.372817807s: waiting for machine to come up
	I0116 02:24:03.533964  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:03.534325  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | unable to find current IP address of domain multinode-835787-m02 in network mk-multinode-835787
	I0116 02:24:03.534350  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:24:03.534266  992079 retry.go:31] will retry after 1.707723315s: waiting for machine to come up
	I0116 02:24:05.244376  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:05.244793  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | unable to find current IP address of domain multinode-835787-m02 in network mk-multinode-835787
	I0116 02:24:05.244825  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:24:05.244751  992079 retry.go:31] will retry after 2.466674128s: waiting for machine to come up
	I0116 02:24:07.712646  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:07.713067  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | unable to find current IP address of domain multinode-835787-m02 in network mk-multinode-835787
	I0116 02:24:07.713093  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:24:07.713002  992079 retry.go:31] will retry after 3.287634131s: waiting for machine to come up
	I0116 02:24:11.001838  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:11.002254  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | unable to find current IP address of domain multinode-835787-m02 in network mk-multinode-835787
	I0116 02:24:11.002278  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:24:11.002214  992079 retry.go:31] will retry after 3.14150206s: waiting for machine to come up
	I0116 02:24:14.147583  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:14.148048  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | unable to find current IP address of domain multinode-835787-m02 in network mk-multinode-835787
	I0116 02:24:14.148074  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | I0116 02:24:14.147967  992079 retry.go:31] will retry after 3.542430471s: waiting for machine to come up
	I0116 02:24:17.693214  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:17.693673  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has current primary IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:17.693711  991718 main.go:141] libmachine: (multinode-835787-m02) Found IP for machine: 192.168.39.15
	I0116 02:24:17.693735  991718 main.go:141] libmachine: (multinode-835787-m02) Reserving static IP address...
	I0116 02:24:17.694225  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | unable to find host DHCP lease matching {name: "multinode-835787-m02", mac: "52:54:00:83:d4:5b", ip: "192.168.39.15"} in network mk-multinode-835787
	I0116 02:24:17.771937  991718 main.go:141] libmachine: (multinode-835787-m02) Reserved static IP address: 192.168.39.15
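The repeated "will retry after ..." lines above are the driver polling the DHCP leases of mk-multinode-835787 with growing pauses until the new MAC address shows up with an IP (192.168.39.15 after roughly 20 seconds here). A minimal version of that wait loop, using hypothetical names rather than minikube's retry package:

package ipwait

import (
	"errors"
	"time"
)

// waitForIP calls lookup with growing pauses until it reports an address or
// the timeout expires.
func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay = delay * 3 / 2 // stretch the wait a little each round
		}
	}
	return "", errors.New("timed out waiting for the machine to get an IP address")
}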
	I0116 02:24:17.772000  991718 main.go:141] libmachine: (multinode-835787-m02) Waiting for SSH to be available...
	I0116 02:24:17.772021  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | Getting to WaitForSSH function...
	I0116 02:24:17.774485  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:17.774986  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:minikube Clientid:01:52:54:00:83:d4:5b}
	I0116 02:24:17.775021  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:17.775211  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | Using SSH client type: external
	I0116 02:24:17.775246  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02/id_rsa (-rw-------)
	I0116 02:24:17.775284  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 02:24:17.775300  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | About to run SSH command:
	I0116 02:24:17.775347  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | exit 0
	I0116 02:24:17.873948  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | SSH cmd err, output: <nil>: 
	I0116 02:24:17.874272  991718 main.go:141] libmachine: (multinode-835787-m02) KVM machine creation complete!
	I0116 02:24:17.874565  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetConfigRaw
	I0116 02:24:17.875111  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .DriverName
	I0116 02:24:17.875295  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .DriverName
	I0116 02:24:17.875453  991718 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0116 02:24:17.875473  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetState
	I0116 02:24:17.876971  991718 main.go:141] libmachine: Detecting operating system of created instance...
	I0116 02:24:17.876987  991718 main.go:141] libmachine: Waiting for SSH to be available...
	I0116 02:24:17.876993  991718 main.go:141] libmachine: Getting to WaitForSSH function...
	I0116 02:24:17.877000  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:24:17.879581  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:17.879966  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:24:17.880006  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:17.880110  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:24:17.880317  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:24:17.880539  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:24:17.880672  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:24:17.880824  991718 main.go:141] libmachine: Using SSH client type: native
	I0116 02:24:17.881184  991718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0116 02:24:17.881197  991718 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0116 02:24:18.013451  991718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:24:18.013485  991718 main.go:141] libmachine: Detecting the provisioner...
	I0116 02:24:18.013498  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:24:18.016572  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:18.017090  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:24:18.017126  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:18.017319  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:24:18.017546  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:24:18.017754  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:24:18.017973  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:24:18.018164  991718 main.go:141] libmachine: Using SSH client type: native
	I0116 02:24:18.018668  991718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0116 02:24:18.018682  991718 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0116 02:24:18.155175  991718 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0116 02:24:18.155286  991718 main.go:141] libmachine: found compatible host: buildroot
	I0116 02:24:18.155296  991718 main.go:141] libmachine: Provisioning with buildroot...
	I0116 02:24:18.155308  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetMachineName
	I0116 02:24:18.155634  991718 buildroot.go:166] provisioning hostname "multinode-835787-m02"
	I0116 02:24:18.155663  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetMachineName
	I0116 02:24:18.155972  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:24:18.158669  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:18.159055  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:24:18.159087  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:18.159235  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:24:18.159500  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:24:18.159687  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:24:18.159853  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:24:18.160039  991718 main.go:141] libmachine: Using SSH client type: native
	I0116 02:24:18.160368  991718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0116 02:24:18.160381  991718 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-835787-m02 && echo "multinode-835787-m02" | sudo tee /etc/hostname
	I0116 02:24:18.312235  991718 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-835787-m02
	
	I0116 02:24:18.312271  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:24:18.315320  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:18.315714  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:24:18.315747  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:18.315898  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:24:18.316127  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:24:18.316315  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:24:18.316486  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:24:18.316637  991718 main.go:141] libmachine: Using SSH client type: native
	I0116 02:24:18.317134  991718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0116 02:24:18.317163  991718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-835787-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-835787-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-835787-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:24:18.457552  991718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
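Provisioning commands such as the hostname change and the /etc/hosts edit above are run over SSH with the key generated earlier (machines/multinode-835787-m02/id_rsa). A stripped-down version of that step using golang.org/x/crypto/ssh (illustrative only; the address, user and key path are assumptions drawn from the log):

package sshsketch

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// runCommand opens one SSH session to the node and runs a single command,
// returning its combined stdout/stderr.
func runCommand(addr, user, keyPath, cmd string) ([]byte, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user, // "docker" on the minikube guest
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg) // addr like "192.168.39.15:22"
	if err != nil {
		return nil, err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer session.Close()
	return session.CombinedOutput(cmd)
}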
	I0116 02:24:18.457590  991718 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 02:24:18.457613  991718 buildroot.go:174] setting up certificates
	I0116 02:24:18.457628  991718 provision.go:83] configureAuth start
	I0116 02:24:18.457640  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetMachineName
	I0116 02:24:18.458000  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetIP
	I0116 02:24:18.461075  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:18.461464  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:24:18.461493  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:18.461677  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:24:18.464141  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:18.464491  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:24:18.464523  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:18.464646  991718 provision.go:138] copyHostCerts
	I0116 02:24:18.464681  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 02:24:18.464733  991718 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 02:24:18.464746  991718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 02:24:18.464828  991718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 02:24:18.464961  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 02:24:18.464989  991718 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 02:24:18.464996  991718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 02:24:18.465038  991718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 02:24:18.465102  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 02:24:18.465129  991718 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 02:24:18.465141  991718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 02:24:18.465177  991718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 02:24:18.465239  991718 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.multinode-835787-m02 san=[192.168.39.15 192.168.39.15 localhost 127.0.0.1 minikube multinode-835787-m02]
	I0116 02:24:18.641554  991718 provision.go:172] copyRemoteCerts
	I0116 02:24:18.641629  991718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:24:18.641664  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:24:18.644722  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:18.645091  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:24:18.645131  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:18.645403  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:24:18.645636  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:24:18.645786  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:24:18.645937  991718 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02/id_rsa Username:docker}
	I0116 02:24:18.744458  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 02:24:18.744544  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 02:24:18.769179  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 02:24:18.769253  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0116 02:24:18.792488  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 02:24:18.792587  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 02:24:18.816057  991718 provision.go:86] duration metric: configureAuth took 358.406642ms
	I0116 02:24:18.816108  991718 buildroot.go:189] setting minikube options for container-runtime
	I0116 02:24:18.816324  991718 config.go:182] Loaded profile config "multinode-835787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:24:18.816453  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:24:18.819312  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:18.819672  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:24:18.819697  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:18.819934  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:24:18.820172  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:24:18.820395  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:24:18.820553  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:24:18.820734  991718 main.go:141] libmachine: Using SSH client type: native
	I0116 02:24:18.821047  991718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0116 02:24:18.821065  991718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 02:24:19.171107  991718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 02:24:19.171142  991718 main.go:141] libmachine: Checking connection to Docker...
	I0116 02:24:19.171154  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetURL
	I0116 02:24:19.172510  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | Using libvirt version 6000000
	I0116 02:24:19.174943  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:19.175306  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:24:19.175338  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:19.175537  991718 main.go:141] libmachine: Docker is up and running!
	I0116 02:24:19.175555  991718 main.go:141] libmachine: Reticulating splines...
	I0116 02:24:19.175562  991718 client.go:171] LocalClient.Create took 23.454344598s
	I0116 02:24:19.175586  991718 start.go:167] duration metric: libmachine.API.Create for "multinode-835787" took 23.454400058s
	I0116 02:24:19.175596  991718 start.go:300] post-start starting for "multinode-835787-m02" (driver="kvm2")
	I0116 02:24:19.175607  991718 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:24:19.175623  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .DriverName
	I0116 02:24:19.175940  991718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:24:19.175984  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:24:19.178401  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:19.178837  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:24:19.178885  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:19.179031  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:24:19.179251  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:24:19.179453  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:24:19.179622  991718 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02/id_rsa Username:docker}
	I0116 02:24:19.276364  991718 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:24:19.280901  991718 command_runner.go:130] > NAME=Buildroot
	I0116 02:24:19.280928  991718 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 02:24:19.280932  991718 command_runner.go:130] > ID=buildroot
	I0116 02:24:19.280938  991718 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 02:24:19.280942  991718 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 02:24:19.281243  991718 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 02:24:19.281278  991718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 02:24:19.281363  991718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 02:24:19.281454  991718 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 02:24:19.281468  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> /etc/ssl/certs/9784822.pem
	I0116 02:24:19.281571  991718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 02:24:19.292730  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 02:24:19.317985  991718 start.go:303] post-start completed in 142.370995ms
	I0116 02:24:19.318052  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetConfigRaw
	I0116 02:24:19.318859  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetIP
	I0116 02:24:19.322171  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:19.322633  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:24:19.322664  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:19.322976  991718 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/config.json ...
	I0116 02:24:19.323239  991718 start.go:128] duration metric: createHost completed in 23.621940631s
	I0116 02:24:19.323268  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:24:19.325783  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:19.326126  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:24:19.326152  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:19.326391  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:24:19.326652  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:24:19.326863  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:24:19.327024  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:24:19.327208  991718 main.go:141] libmachine: Using SSH client type: native
	I0116 02:24:19.327531  991718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0116 02:24:19.327543  991718 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 02:24:19.462726  991718 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705371859.441987603
	
	I0116 02:24:19.462755  991718 fix.go:206] guest clock: 1705371859.441987603
	I0116 02:24:19.462763  991718 fix.go:219] Guest: 2024-01-16 02:24:19.441987603 +0000 UTC Remote: 2024-01-16 02:24:19.32325458 +0000 UTC m=+92.060889131 (delta=118.733023ms)
	I0116 02:24:19.462780  991718 fix.go:190] guest clock delta is within tolerance: 118.733023ms
	I0116 02:24:19.462785  991718 start.go:83] releasing machines lock for "multinode-835787-m02", held for 23.761588075s
	I0116 02:24:19.462809  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .DriverName
	I0116 02:24:19.463136  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetIP
	I0116 02:24:19.466149  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:19.466535  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:24:19.466570  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:19.469262  991718 out.go:177] * Found network options:
	I0116 02:24:19.470848  991718 out.go:177]   - NO_PROXY=192.168.39.50
	W0116 02:24:19.472416  991718 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 02:24:19.472477  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .DriverName
	I0116 02:24:19.473165  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .DriverName
	I0116 02:24:19.473381  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .DriverName
	I0116 02:24:19.473475  991718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:24:19.473517  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	W0116 02:24:19.473624  991718 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 02:24:19.473717  991718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 02:24:19.473744  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:24:19.476328  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:19.476650  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:19.476748  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:24:19.476798  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:19.476920  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:24:19.477002  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:24:19.477029  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:19.477096  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:24:19.477150  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:24:19.477252  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:24:19.477260  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:24:19.477418  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:24:19.477434  991718 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02/id_rsa Username:docker}
	I0116 02:24:19.477572  991718 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02/id_rsa Username:docker}
	I0116 02:24:19.731343  991718 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 02:24:19.731478  991718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 02:24:19.737570  991718 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0116 02:24:19.737627  991718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 02:24:19.737741  991718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:24:19.753628  991718 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0116 02:24:19.753671  991718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 02:24:19.753680  991718 start.go:475] detecting cgroup driver to use...
	I0116 02:24:19.753782  991718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:24:19.768866  991718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:24:19.783770  991718 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:24:19.783845  991718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:24:19.798544  991718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:24:19.814399  991718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:24:19.830594  991718 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0116 02:24:19.929095  991718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:24:19.944039  991718 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0116 02:24:20.055169  991718 docker.go:233] disabling docker service ...
	I0116 02:24:20.055259  991718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:24:20.069703  991718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:24:20.081036  991718 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0116 02:24:20.081831  991718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:24:20.201393  991718 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0116 02:24:20.201489  991718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:24:20.313648  991718 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0116 02:24:20.313686  991718 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0116 02:24:20.313765  991718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 02:24:20.329039  991718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:24:20.347889  991718 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0116 02:24:20.347951  991718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 02:24:20.348008  991718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:24:20.357688  991718 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 02:24:20.357783  991718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:24:20.368764  991718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:24:20.378968  991718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:24:20.390573  991718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:24:20.401250  991718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:24:20.410142  991718 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 02:24:20.410337  991718 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 02:24:20.410403  991718 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 02:24:20.423354  991718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:24:20.433170  991718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:24:20.561772  991718 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 02:24:20.745003  991718 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 02:24:20.745081  991718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 02:24:20.750449  991718 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 02:24:20.750484  991718 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 02:24:20.750492  991718 command_runner.go:130] > Device: 16h/22d	Inode: 721         Links: 1
	I0116 02:24:20.750501  991718 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:24:20.750508  991718 command_runner.go:130] > Access: 2024-01-16 02:24:20.712169425 +0000
	I0116 02:24:20.750517  991718 command_runner.go:130] > Modify: 2024-01-16 02:24:20.712169425 +0000
	I0116 02:24:20.750525  991718 command_runner.go:130] > Change: 2024-01-16 02:24:20.712169425 +0000
	I0116 02:24:20.750531  991718 command_runner.go:130] >  Birth: -
	I0116 02:24:20.750694  991718 start.go:543] Will wait 60s for crictl version
	I0116 02:24:20.750807  991718 ssh_runner.go:195] Run: which crictl
	I0116 02:24:20.754653  991718 command_runner.go:130] > /usr/bin/crictl
	I0116 02:24:20.755005  991718 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:24:20.793058  991718 command_runner.go:130] > Version:  0.1.0
	I0116 02:24:20.793087  991718 command_runner.go:130] > RuntimeName:  cri-o
	I0116 02:24:20.793092  991718 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0116 02:24:20.793097  991718 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 02:24:20.794811  991718 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 02:24:20.794904  991718 ssh_runner.go:195] Run: crio --version
	I0116 02:24:20.845883  991718 command_runner.go:130] > crio version 1.24.1
	I0116 02:24:20.845909  991718 command_runner.go:130] > Version:          1.24.1
	I0116 02:24:20.845920  991718 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 02:24:20.845926  991718 command_runner.go:130] > GitTreeState:     dirty
	I0116 02:24:20.845939  991718 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 02:24:20.845946  991718 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 02:24:20.845952  991718 command_runner.go:130] > Compiler:         gc
	I0116 02:24:20.845959  991718 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:24:20.845967  991718 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:24:20.845979  991718 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:24:20.845991  991718 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:24:20.845999  991718 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:24:20.846083  991718 ssh_runner.go:195] Run: crio --version
	I0116 02:24:20.890419  991718 command_runner.go:130] > crio version 1.24.1
	I0116 02:24:20.890442  991718 command_runner.go:130] > Version:          1.24.1
	I0116 02:24:20.890450  991718 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 02:24:20.890454  991718 command_runner.go:130] > GitTreeState:     dirty
	I0116 02:24:20.890461  991718 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 02:24:20.890466  991718 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 02:24:20.890470  991718 command_runner.go:130] > Compiler:         gc
	I0116 02:24:20.890474  991718 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:24:20.890479  991718 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:24:20.890491  991718 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:24:20.890497  991718 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:24:20.890503  991718 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:24:20.892879  991718 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 02:24:20.894594  991718 out.go:177]   - env NO_PROXY=192.168.39.50
	I0116 02:24:20.896190  991718 main.go:141] libmachine: (multinode-835787-m02) Calling .GetIP
	I0116 02:24:20.899508  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:20.899878  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:24:20.899911  991718 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:24:20.900147  991718 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 02:24:20.905185  991718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:24:20.920773  991718 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787 for IP: 192.168.39.15
	I0116 02:24:20.920821  991718 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:24:20.920999  991718 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 02:24:20.921051  991718 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 02:24:20.921070  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 02:24:20.921091  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 02:24:20.921116  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 02:24:20.921140  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 02:24:20.921215  991718 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 02:24:20.921260  991718 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 02:24:20.921276  991718 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 02:24:20.921313  991718 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 02:24:20.921353  991718 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:24:20.921387  991718 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 02:24:20.921458  991718 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 02:24:20.921500  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:24:20.921521  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem -> /usr/share/ca-certificates/978482.pem
	I0116 02:24:20.921546  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> /usr/share/ca-certificates/9784822.pem
	I0116 02:24:20.922008  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:24:20.947239  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 02:24:20.972650  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:24:21.000364  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 02:24:21.028061  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:24:21.053166  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 02:24:21.078268  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 02:24:21.106049  991718 ssh_runner.go:195] Run: openssl version
	I0116 02:24:21.111528  991718 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 02:24:21.111858  991718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:24:21.122728  991718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:24:21.127840  991718 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:24:21.127873  991718 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:24:21.127935  991718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:24:21.133748  991718 command_runner.go:130] > b5213941
	I0116 02:24:21.133869  991718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 02:24:21.144061  991718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 02:24:21.156124  991718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 02:24:21.161355  991718 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 02:24:21.161404  991718 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 02:24:21.161530  991718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 02:24:21.167345  991718 command_runner.go:130] > 51391683
	I0116 02:24:21.167697  991718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 02:24:21.178469  991718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 02:24:21.189361  991718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 02:24:21.194299  991718 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 02:24:21.194523  991718 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 02:24:21.194593  991718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 02:24:21.200536  991718 command_runner.go:130] > 3ec20f2e
	I0116 02:24:21.200763  991718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 02:24:21.211629  991718 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:24:21.216429  991718 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:24:21.216473  991718 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:24:21.216572  991718 ssh_runner.go:195] Run: crio config
	I0116 02:24:21.274483  991718 command_runner.go:130] ! time="2024-01-16 02:24:21.257776775Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0116 02:24:21.274516  991718 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0116 02:24:21.283587  991718 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 02:24:21.283619  991718 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 02:24:21.283629  991718 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 02:24:21.283634  991718 command_runner.go:130] > #
	I0116 02:24:21.283643  991718 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 02:24:21.283653  991718 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 02:24:21.283675  991718 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 02:24:21.283691  991718 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 02:24:21.283711  991718 command_runner.go:130] > # reload'.
	I0116 02:24:21.283722  991718 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 02:24:21.283734  991718 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 02:24:21.283748  991718 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 02:24:21.283762  991718 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 02:24:21.283771  991718 command_runner.go:130] > [crio]
	I0116 02:24:21.283784  991718 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 02:24:21.283796  991718 command_runner.go:130] > # containers images, in this directory.
	I0116 02:24:21.283807  991718 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0116 02:24:21.283822  991718 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 02:24:21.283834  991718 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0116 02:24:21.283848  991718 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 02:24:21.283862  991718 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 02:24:21.283874  991718 command_runner.go:130] > storage_driver = "overlay"
	I0116 02:24:21.283884  991718 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 02:24:21.283899  991718 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 02:24:21.283909  991718 command_runner.go:130] > storage_option = [
	I0116 02:24:21.283918  991718 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0116 02:24:21.283930  991718 command_runner.go:130] > ]
	I0116 02:24:21.283944  991718 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 02:24:21.283959  991718 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 02:24:21.283970  991718 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 02:24:21.283986  991718 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 02:24:21.284000  991718 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 02:24:21.284010  991718 command_runner.go:130] > # always happen on a node reboot
	I0116 02:24:21.284019  991718 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 02:24:21.284033  991718 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 02:24:21.284046  991718 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 02:24:21.284070  991718 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 02:24:21.284082  991718 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 02:24:21.284098  991718 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 02:24:21.284114  991718 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 02:24:21.284126  991718 command_runner.go:130] > # internal_wipe = true
	I0116 02:24:21.284136  991718 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 02:24:21.284150  991718 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 02:24:21.284163  991718 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 02:24:21.284177  991718 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 02:24:21.284191  991718 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 02:24:21.284200  991718 command_runner.go:130] > [crio.api]
	I0116 02:24:21.284209  991718 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 02:24:21.284216  991718 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 02:24:21.284226  991718 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 02:24:21.284234  991718 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 02:24:21.284254  991718 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 02:24:21.284267  991718 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 02:24:21.284276  991718 command_runner.go:130] > # stream_port = "0"
	I0116 02:24:21.284286  991718 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 02:24:21.284297  991718 command_runner.go:130] > # stream_enable_tls = false
	I0116 02:24:21.284309  991718 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 02:24:21.284325  991718 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 02:24:21.284340  991718 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 02:24:21.284354  991718 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 02:24:21.284364  991718 command_runner.go:130] > # minutes.
	I0116 02:24:21.284373  991718 command_runner.go:130] > # stream_tls_cert = ""
	I0116 02:24:21.284390  991718 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 02:24:21.284410  991718 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 02:24:21.284420  991718 command_runner.go:130] > # stream_tls_key = ""
	I0116 02:24:21.284432  991718 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 02:24:21.284446  991718 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 02:24:21.284460  991718 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 02:24:21.284470  991718 command_runner.go:130] > # stream_tls_ca = ""
	I0116 02:24:21.284487  991718 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:24:21.284499  991718 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0116 02:24:21.284515  991718 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:24:21.284526  991718 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0116 02:24:21.284560  991718 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 02:24:21.284574  991718 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 02:24:21.284581  991718 command_runner.go:130] > [crio.runtime]
	I0116 02:24:21.284591  991718 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 02:24:21.284604  991718 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 02:24:21.284615  991718 command_runner.go:130] > # "nofile=1024:2048"
	I0116 02:24:21.284627  991718 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 02:24:21.284641  991718 command_runner.go:130] > # default_ulimits = [
	I0116 02:24:21.284650  991718 command_runner.go:130] > # ]
	I0116 02:24:21.284666  991718 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 02:24:21.284676  991718 command_runner.go:130] > # no_pivot = false
	I0116 02:24:21.284687  991718 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 02:24:21.284701  991718 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 02:24:21.284714  991718 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 02:24:21.284727  991718 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 02:24:21.284740  991718 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 02:24:21.284754  991718 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:24:21.284766  991718 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0116 02:24:21.284777  991718 command_runner.go:130] > # Cgroup setting for conmon
	I0116 02:24:21.284795  991718 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 02:24:21.284805  991718 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 02:24:21.284819  991718 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 02:24:21.284831  991718 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 02:24:21.284846  991718 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:24:21.284856  991718 command_runner.go:130] > conmon_env = [
	I0116 02:24:21.284872  991718 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0116 02:24:21.284882  991718 command_runner.go:130] > ]
	I0116 02:24:21.284892  991718 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 02:24:21.284904  991718 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 02:24:21.284917  991718 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 02:24:21.284927  991718 command_runner.go:130] > # default_env = [
	I0116 02:24:21.284934  991718 command_runner.go:130] > # ]
	I0116 02:24:21.284945  991718 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 02:24:21.284955  991718 command_runner.go:130] > # selinux = false
	I0116 02:24:21.284970  991718 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 02:24:21.284984  991718 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 02:24:21.284997  991718 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 02:24:21.285008  991718 command_runner.go:130] > # seccomp_profile = ""
	I0116 02:24:21.285018  991718 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 02:24:21.285032  991718 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 02:24:21.285046  991718 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 02:24:21.285057  991718 command_runner.go:130] > # which might increase security.
	I0116 02:24:21.285069  991718 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0116 02:24:21.285091  991718 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 02:24:21.285105  991718 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 02:24:21.285117  991718 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 02:24:21.285131  991718 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 02:24:21.285143  991718 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:24:21.285155  991718 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 02:24:21.285167  991718 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 02:24:21.285176  991718 command_runner.go:130] > # the cgroup blockio controller.
	I0116 02:24:21.285187  991718 command_runner.go:130] > # blockio_config_file = ""
	I0116 02:24:21.285199  991718 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 02:24:21.285210  991718 command_runner.go:130] > # irqbalance daemon.
	I0116 02:24:21.285227  991718 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 02:24:21.285242  991718 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 02:24:21.285254  991718 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:24:21.285264  991718 command_runner.go:130] > # rdt_config_file = ""
	I0116 02:24:21.285277  991718 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 02:24:21.285287  991718 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 02:24:21.285298  991718 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 02:24:21.285312  991718 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 02:24:21.285326  991718 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 02:24:21.285340  991718 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 02:24:21.285350  991718 command_runner.go:130] > # will be added.
	I0116 02:24:21.285361  991718 command_runner.go:130] > # default_capabilities = [
	I0116 02:24:21.285370  991718 command_runner.go:130] > # 	"CHOWN",
	I0116 02:24:21.285378  991718 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 02:24:21.285389  991718 command_runner.go:130] > # 	"FSETID",
	I0116 02:24:21.285398  991718 command_runner.go:130] > # 	"FOWNER",
	I0116 02:24:21.285407  991718 command_runner.go:130] > # 	"SETGID",
	I0116 02:24:21.285416  991718 command_runner.go:130] > # 	"SETUID",
	I0116 02:24:21.285425  991718 command_runner.go:130] > # 	"SETPCAP",
	I0116 02:24:21.285435  991718 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 02:24:21.285445  991718 command_runner.go:130] > # 	"KILL",
	I0116 02:24:21.285454  991718 command_runner.go:130] > # ]
	I0116 02:24:21.285465  991718 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 02:24:21.285479  991718 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:24:21.285489  991718 command_runner.go:130] > # default_sysctls = [
	I0116 02:24:21.285502  991718 command_runner.go:130] > # ]
	I0116 02:24:21.285514  991718 command_runner.go:130] > # List of devices on the host that a
	I0116 02:24:21.285528  991718 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 02:24:21.285539  991718 command_runner.go:130] > # allowed_devices = [
	I0116 02:24:21.285549  991718 command_runner.go:130] > # 	"/dev/fuse",
	I0116 02:24:21.285558  991718 command_runner.go:130] > # ]
	I0116 02:24:21.285568  991718 command_runner.go:130] > # List of additional devices. specified as
	I0116 02:24:21.285583  991718 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 02:24:21.285596  991718 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 02:24:21.285641  991718 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:24:21.285651  991718 command_runner.go:130] > # additional_devices = [
	I0116 02:24:21.285658  991718 command_runner.go:130] > # ]
	I0116 02:24:21.285677  991718 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 02:24:21.285688  991718 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 02:24:21.285698  991718 command_runner.go:130] > # 	"/etc/cdi",
	I0116 02:24:21.285711  991718 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 02:24:21.285720  991718 command_runner.go:130] > # ]
	I0116 02:24:21.285732  991718 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 02:24:21.285748  991718 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 02:24:21.285759  991718 command_runner.go:130] > # Defaults to false.
	I0116 02:24:21.285771  991718 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 02:24:21.285785  991718 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 02:24:21.285797  991718 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 02:24:21.285820  991718 command_runner.go:130] > # hooks_dir = [
	I0116 02:24:21.285833  991718 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 02:24:21.285853  991718 command_runner.go:130] > # ]
	I0116 02:24:21.285867  991718 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 02:24:21.285887  991718 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 02:24:21.285899  991718 command_runner.go:130] > # its default mounts from the following two files:
	I0116 02:24:21.285906  991718 command_runner.go:130] > #
	I0116 02:24:21.285921  991718 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 02:24:21.285934  991718 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 02:24:21.285948  991718 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 02:24:21.285957  991718 command_runner.go:130] > #
	I0116 02:24:21.285968  991718 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 02:24:21.285982  991718 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 02:24:21.286001  991718 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 02:24:21.286013  991718 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 02:24:21.286021  991718 command_runner.go:130] > #
	I0116 02:24:21.286030  991718 command_runner.go:130] > # default_mounts_file = ""
	I0116 02:24:21.286042  991718 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 02:24:21.286057  991718 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 02:24:21.286067  991718 command_runner.go:130] > pids_limit = 1024
	I0116 02:24:21.286078  991718 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0116 02:24:21.286092  991718 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 02:24:21.286106  991718 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 02:24:21.286123  991718 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 02:24:21.286133  991718 command_runner.go:130] > # log_size_max = -1
	I0116 02:24:21.286148  991718 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0116 02:24:21.286158  991718 command_runner.go:130] > # log_to_journald = false
	I0116 02:24:21.286169  991718 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 02:24:21.286182  991718 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 02:24:21.286194  991718 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 02:24:21.286207  991718 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 02:24:21.286223  991718 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 02:24:21.286234  991718 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 02:24:21.286246  991718 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 02:24:21.286253  991718 command_runner.go:130] > # read_only = false
	I0116 02:24:21.286268  991718 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 02:24:21.286282  991718 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 02:24:21.286293  991718 command_runner.go:130] > # live configuration reload.
	I0116 02:24:21.286301  991718 command_runner.go:130] > # log_level = "info"
	I0116 02:24:21.286314  991718 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 02:24:21.286326  991718 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:24:21.286337  991718 command_runner.go:130] > # log_filter = ""
	I0116 02:24:21.286351  991718 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 02:24:21.286365  991718 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 02:24:21.286376  991718 command_runner.go:130] > # separated by comma.
	I0116 02:24:21.286387  991718 command_runner.go:130] > # uid_mappings = ""
	I0116 02:24:21.286400  991718 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 02:24:21.286414  991718 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 02:24:21.286423  991718 command_runner.go:130] > # separated by comma.
	I0116 02:24:21.286438  991718 command_runner.go:130] > # gid_mappings = ""
	I0116 02:24:21.286452  991718 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 02:24:21.286466  991718 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:24:21.286480  991718 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:24:21.286491  991718 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 02:24:21.286507  991718 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 02:24:21.286521  991718 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:24:21.286535  991718 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:24:21.286546  991718 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 02:24:21.286558  991718 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 02:24:21.286571  991718 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 02:24:21.286584  991718 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 02:24:21.286595  991718 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 02:24:21.286606  991718 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 02:24:21.286619  991718 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 02:24:21.286631  991718 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 02:24:21.286644  991718 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 02:24:21.286657  991718 command_runner.go:130] > drop_infra_ctr = false
	I0116 02:24:21.286680  991718 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 02:24:21.286694  991718 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 02:24:21.286710  991718 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 02:24:21.286720  991718 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 02:24:21.286734  991718 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 02:24:21.286747  991718 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 02:24:21.286756  991718 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 02:24:21.286769  991718 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 02:24:21.286780  991718 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0116 02:24:21.286793  991718 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 02:24:21.286807  991718 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 02:24:21.286821  991718 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 02:24:21.286832  991718 command_runner.go:130] > # default_runtime = "runc"
	I0116 02:24:21.286842  991718 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 02:24:21.286863  991718 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0116 02:24:21.286882  991718 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 02:24:21.286894  991718 command_runner.go:130] > # creation as a file is not desired either.
	I0116 02:24:21.286911  991718 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 02:24:21.286928  991718 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 02:24:21.286937  991718 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 02:24:21.286947  991718 command_runner.go:130] > # ]
	I0116 02:24:21.286960  991718 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 02:24:21.286975  991718 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 02:24:21.286989  991718 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 02:24:21.287003  991718 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 02:24:21.287011  991718 command_runner.go:130] > #
	I0116 02:24:21.287019  991718 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 02:24:21.287031  991718 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 02:24:21.287040  991718 command_runner.go:130] > #  runtime_type = "oci"
	I0116 02:24:21.287052  991718 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 02:24:21.287064  991718 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 02:24:21.287074  991718 command_runner.go:130] > #  allowed_annotations = []
	I0116 02:24:21.287082  991718 command_runner.go:130] > # Where:
	I0116 02:24:21.287094  991718 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 02:24:21.287109  991718 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 02:24:21.287123  991718 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 02:24:21.287141  991718 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 02:24:21.287151  991718 command_runner.go:130] > #   in $PATH.
	I0116 02:24:21.287164  991718 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 02:24:21.287174  991718 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 02:24:21.287188  991718 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 02:24:21.287197  991718 command_runner.go:130] > #   state.
	I0116 02:24:21.287209  991718 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 02:24:21.287222  991718 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0116 02:24:21.287236  991718 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 02:24:21.287249  991718 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 02:24:21.287263  991718 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 02:24:21.287277  991718 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 02:24:21.287288  991718 command_runner.go:130] > #   The currently recognized values are:
	I0116 02:24:21.287300  991718 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 02:24:21.287320  991718 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 02:24:21.287334  991718 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 02:24:21.287348  991718 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 02:24:21.287363  991718 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 02:24:21.287382  991718 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 02:24:21.287396  991718 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 02:24:21.287411  991718 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 02:24:21.287423  991718 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 02:24:21.287434  991718 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 02:24:21.287446  991718 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0116 02:24:21.287456  991718 command_runner.go:130] > runtime_type = "oci"
	I0116 02:24:21.287467  991718 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 02:24:21.287477  991718 command_runner.go:130] > runtime_config_path = ""
	I0116 02:24:21.287487  991718 command_runner.go:130] > monitor_path = ""
	I0116 02:24:21.287498  991718 command_runner.go:130] > monitor_cgroup = ""
	I0116 02:24:21.287512  991718 command_runner.go:130] > monitor_exec_cgroup = ""
	I0116 02:24:21.287526  991718 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 02:24:21.287536  991718 command_runner.go:130] > # running containers
	I0116 02:24:21.287544  991718 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 02:24:21.287558  991718 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 02:24:21.287651  991718 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 02:24:21.287675  991718 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 02:24:21.287687  991718 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 02:24:21.287696  991718 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 02:24:21.287708  991718 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 02:24:21.287720  991718 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 02:24:21.287732  991718 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 02:24:21.287741  991718 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0116 02:24:21.287755  991718 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 02:24:21.287768  991718 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 02:24:21.287783  991718 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 02:24:21.287799  991718 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0116 02:24:21.287816  991718 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 02:24:21.287828  991718 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 02:24:21.287844  991718 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 02:24:21.287860  991718 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 02:24:21.287873  991718 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 02:24:21.287888  991718 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 02:24:21.287898  991718 command_runner.go:130] > # Example:
	I0116 02:24:21.287909  991718 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 02:24:21.287926  991718 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 02:24:21.287938  991718 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 02:24:21.287950  991718 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 02:24:21.287957  991718 command_runner.go:130] > # cpuset = 0
	I0116 02:24:21.287968  991718 command_runner.go:130] > # cpushares = "0-1"
	I0116 02:24:21.287977  991718 command_runner.go:130] > # Where:
	I0116 02:24:21.287986  991718 command_runner.go:130] > # The workload name is workload-type.
	I0116 02:24:21.288002  991718 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 02:24:21.288015  991718 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 02:24:21.288028  991718 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 02:24:21.288045  991718 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 02:24:21.288059  991718 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0116 02:24:21.288068  991718 command_runner.go:130] > # 
	I0116 02:24:21.288080  991718 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 02:24:21.288088  991718 command_runner.go:130] > #
	I0116 02:24:21.288099  991718 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 02:24:21.288113  991718 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 02:24:21.288127  991718 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 02:24:21.288143  991718 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 02:24:21.288157  991718 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 02:24:21.288166  991718 command_runner.go:130] > [crio.image]
	I0116 02:24:21.288177  991718 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 02:24:21.288189  991718 command_runner.go:130] > # default_transport = "docker://"
	I0116 02:24:21.288203  991718 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 02:24:21.288217  991718 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:24:21.288228  991718 command_runner.go:130] > # global_auth_file = ""
	I0116 02:24:21.288241  991718 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 02:24:21.288253  991718 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:24:21.288266  991718 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 02:24:21.288280  991718 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 02:24:21.288294  991718 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:24:21.288305  991718 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:24:21.288314  991718 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 02:24:21.288328  991718 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 02:24:21.288342  991718 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0116 02:24:21.288356  991718 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0116 02:24:21.288375  991718 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 02:24:21.288386  991718 command_runner.go:130] > # pause_command = "/pause"
	I0116 02:24:21.288397  991718 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 02:24:21.288411  991718 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 02:24:21.288425  991718 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 02:24:21.288439  991718 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 02:24:21.288452  991718 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 02:24:21.288462  991718 command_runner.go:130] > # signature_policy = ""
	I0116 02:24:21.288476  991718 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 02:24:21.288494  991718 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 02:24:21.288505  991718 command_runner.go:130] > # changing them here.
	I0116 02:24:21.288516  991718 command_runner.go:130] > # insecure_registries = [
	I0116 02:24:21.288525  991718 command_runner.go:130] > # ]
	I0116 02:24:21.288542  991718 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 02:24:21.288553  991718 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0116 02:24:21.288561  991718 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 02:24:21.288573  991718 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 02:24:21.288584  991718 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 02:24:21.288599  991718 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 02:24:21.288609  991718 command_runner.go:130] > # CNI plugins.
	I0116 02:24:21.288618  991718 command_runner.go:130] > [crio.network]
	I0116 02:24:21.288631  991718 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 02:24:21.288644  991718 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0116 02:24:21.288654  991718 command_runner.go:130] > # cni_default_network = ""
	I0116 02:24:21.288669  991718 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 02:24:21.288680  991718 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 02:24:21.288691  991718 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 02:24:21.288701  991718 command_runner.go:130] > # plugin_dirs = [
	I0116 02:24:21.288711  991718 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 02:24:21.288718  991718 command_runner.go:130] > # ]
	I0116 02:24:21.288732  991718 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 02:24:21.288742  991718 command_runner.go:130] > [crio.metrics]
	I0116 02:24:21.288753  991718 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 02:24:21.288761  991718 command_runner.go:130] > enable_metrics = true
	I0116 02:24:21.288773  991718 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 02:24:21.288785  991718 command_runner.go:130] > # Per default all metrics are enabled.
	I0116 02:24:21.288803  991718 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0116 02:24:21.288817  991718 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 02:24:21.288831  991718 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 02:24:21.288842  991718 command_runner.go:130] > # metrics_collectors = [
	I0116 02:24:21.288853  991718 command_runner.go:130] > # 	"operations",
	I0116 02:24:21.288862  991718 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 02:24:21.288871  991718 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 02:24:21.288881  991718 command_runner.go:130] > # 	"operations_errors",
	I0116 02:24:21.288890  991718 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 02:24:21.288901  991718 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 02:24:21.288910  991718 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 02:24:21.288921  991718 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 02:24:21.288930  991718 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 02:24:21.288941  991718 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 02:24:21.288948  991718 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 02:24:21.288956  991718 command_runner.go:130] > # 	"containers_oom_total",
	I0116 02:24:21.288974  991718 command_runner.go:130] > # 	"containers_oom",
	I0116 02:24:21.288985  991718 command_runner.go:130] > # 	"processes_defunct",
	I0116 02:24:21.289000  991718 command_runner.go:130] > # 	"operations_total",
	I0116 02:24:21.289011  991718 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 02:24:21.289021  991718 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 02:24:21.289031  991718 command_runner.go:130] > # 	"operations_errors_total",
	I0116 02:24:21.289039  991718 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 02:24:21.289052  991718 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 02:24:21.289064  991718 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 02:24:21.289073  991718 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 02:24:21.289084  991718 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 02:24:21.289093  991718 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 02:24:21.289102  991718 command_runner.go:130] > # ]
	I0116 02:24:21.289114  991718 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 02:24:21.289124  991718 command_runner.go:130] > # metrics_port = 9090
	I0116 02:24:21.289140  991718 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 02:24:21.289150  991718 command_runner.go:130] > # metrics_socket = ""
	I0116 02:24:21.289162  991718 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 02:24:21.289174  991718 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 02:24:21.289188  991718 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 02:24:21.289203  991718 command_runner.go:130] > # certificate on any modification event.
	I0116 02:24:21.289213  991718 command_runner.go:130] > # metrics_cert = ""
	I0116 02:24:21.289223  991718 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 02:24:21.289235  991718 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 02:24:21.289245  991718 command_runner.go:130] > # metrics_key = ""
	I0116 02:24:21.289258  991718 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 02:24:21.289268  991718 command_runner.go:130] > [crio.tracing]
	I0116 02:24:21.289279  991718 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 02:24:21.289290  991718 command_runner.go:130] > # enable_tracing = false
	I0116 02:24:21.289302  991718 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0116 02:24:21.289310  991718 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 02:24:21.289323  991718 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 02:24:21.289335  991718 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 02:24:21.289349  991718 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 02:24:21.289359  991718 command_runner.go:130] > [crio.stats]
	I0116 02:24:21.289371  991718 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 02:24:21.289384  991718 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 02:24:21.289395  991718 command_runner.go:130] > # stats_collection_period = 0
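	The dump above is the CRI-O configuration minikube rendered for the new machine (cgroupfs cgroup manager, pids_limit = 1024, pause_image registry.k8s.io/pause:3.9, metrics enabled, runc as the default runtime). A quick way to spot-check it on the node is sketched below; this assumes the stock /etc/crio/crio.conf path and a standard crictl install, and the -n/--node flag to target the second node, none of which is confirmed by this log:

	    minikube -p multinode-835787 ssh -n m02 "sudo cat /etc/crio/crio.conf"           # the file dumped above
	    minikube -p multinode-835787 ssh -n m02 "sudo crictl info"                       # runtime config/status as the CRI sees it
	    minikube -p multinode-835787 ssh -n m02 "sudo systemctl status crio --no-pager"  # confirm the service picked it up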
	I0116 02:24:21.289506  991718 cni.go:84] Creating CNI manager for ""
	I0116 02:24:21.289518  991718 cni.go:136] 2 nodes found, recommending kindnet
	I0116 02:24:21.289530  991718 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:24:21.289559  991718 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.15 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-835787 NodeName:multinode-835787-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 02:24:21.289730  991718 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-835787-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 02:24:21.289817  991718 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-835787-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-835787 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
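	The kubeadm InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration block and the kubelet unit override above are what get pushed to the worker in the next steps. A hedged way to inspect the results afterwards (the drop-in path matches the scp destination logged below; the kubeconfig context name is assumed to equal the profile name, and -n/--node is assumed for addressing m02):

	    kubectl --context multinode-835787 -n kube-system get cm kubeadm-config -o yaml
	    minikube -p multinode-835787 ssh -n m02 "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
	    minikube -p multinode-835787 ssh -n m02 "systemctl cat kubelet"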
	I0116 02:24:21.289897  991718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 02:24:21.299869  991718 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0116 02:24:21.299939  991718 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0116 02:24:21.300016  991718 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0116 02:24:21.309554  991718 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17967-971255/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0116 02:24:21.309589  991718 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17967-971255/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0116 02:24:21.309565  991718 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0116 02:24:21.309749  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0116 02:24:21.309871  991718 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0116 02:24:21.314630  991718 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0116 02:24:21.314926  991718 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0116 02:24:21.314973  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0116 02:24:21.962297  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0116 02:24:21.962400  991718 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0116 02:24:21.967676  991718 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0116 02:24:21.967879  991718 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0116 02:24:21.967924  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0116 02:24:28.833150  991718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:24:28.848109  991718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0116 02:24:28.848224  991718 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0116 02:24:28.853474  991718 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0116 02:24:28.853519  991718 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0116 02:24:28.853550  991718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0116 02:24:29.392502  991718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0116 02:24:29.401898  991718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0116 02:24:29.419106  991718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
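	At this point kubectl, kubeadm and kubelet for v1.28.4 have been copied into /var/lib/minikube/binaries/v1.28.4 on the worker and the kubelet unit plus its 10-kubeadm.conf drop-in have been written. A sketch for verifying the transferred binaries report the expected versions (binary paths are taken from the log; the -n/--node flag for addressing m02 is an assumption):

	    minikube -p multinode-835787 ssh -n m02 "/var/lib/minikube/binaries/v1.28.4/kubectl version --client"
	    minikube -p multinode-835787 ssh -n m02 "/var/lib/minikube/binaries/v1.28.4/kubeadm version -o short"
	    minikube -p multinode-835787 ssh -n m02 "/var/lib/minikube/binaries/v1.28.4/kubelet --version"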
	I0116 02:24:29.435914  991718 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I0116 02:24:29.439834  991718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:24:29.452788  991718 host.go:66] Checking if "multinode-835787" exists ...
	I0116 02:24:29.453073  991718 config.go:182] Loaded profile config "multinode-835787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:24:29.453258  991718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:24:29.453319  991718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:24:29.469627  991718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I0116 02:24:29.470160  991718 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:24:29.470632  991718 main.go:141] libmachine: Using API Version  1
	I0116 02:24:29.470656  991718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:24:29.470976  991718 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:24:29.471176  991718 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:24:29.471328  991718 start.go:304] JoinCluster: &{Name:multinode-835787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-835787 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:24:29.471449  991718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0116 02:24:29.471467  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:24:29.474390  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:24:29.474798  991718 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:24:29.474831  991718 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:24:29.474953  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:24:29.475150  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:24:29.475288  991718 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:24:29.475458  991718 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa Username:docker}
	I0116 02:24:29.643107  991718 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token rebcqk.2z5y506p5rj92cnh --discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b 
	I0116 02:24:29.643196  991718 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 02:24:29.643234  991718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rebcqk.2z5y506p5rj92cnh --discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-835787-m02"
	I0116 02:24:29.694562  991718 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 02:24:29.855720  991718 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0116 02:24:29.855808  991718 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0116 02:24:29.894883  991718 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:24:29.894923  991718 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:24:29.894932  991718 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 02:24:30.024169  991718 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0116 02:24:32.543084  991718 command_runner.go:130] > This node has joined the cluster:
	I0116 02:24:32.543120  991718 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0116 02:24:32.543131  991718 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0116 02:24:32.543142  991718 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0116 02:24:32.545016  991718 command_runner.go:130] ! W0116 02:24:29.687675     816 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0116 02:24:32.545049  991718 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:24:32.545079  991718 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rebcqk.2z5y506p5rj92cnh --discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-835787-m02": (2.90182651s)
	I0116 02:24:32.545107  991718 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0116 02:24:32.824891  991718 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0116 02:24:32.825036  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=multinode-835787 minikube.k8s.io/updated_at=2024_01_16T02_24_32_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:24:32.931306  991718 command_runner.go:130] > node/multinode-835787-m02 labeled
	I0116 02:24:32.932973  991718 start.go:306] JoinCluster complete in 3.461641442s
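	JoinCluster issues a fresh bootstrap token on the control plane, runs kubeadm join on the worker with --ignore-preflight-errors=all, then enables and starts kubelet and labels the new node. Roughly equivalent manual checks would be (a sketch; the kubeconfig context is assumed to match the profile name):

	    kubectl --context multinode-835787 get nodes -o wide
	    kubectl --context multinode-835787 get node multinode-835787-m02 --show-labels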
	I0116 02:24:32.933009  991718 cni.go:84] Creating CNI manager for ""
	I0116 02:24:32.933016  991718 cni.go:136] 2 nodes found, recommending kindnet
	I0116 02:24:32.933080  991718 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 02:24:32.938716  991718 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 02:24:32.938750  991718 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 02:24:32.938764  991718 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 02:24:32.938775  991718 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:24:32.938785  991718 command_runner.go:130] > Access: 2024-01-16 02:23:00.841750044 +0000
	I0116 02:24:32.938793  991718 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 02:24:32.938800  991718 command_runner.go:130] > Change: 2024-01-16 02:22:59.017750044 +0000
	I0116 02:24:32.938807  991718 command_runner.go:130] >  Birth: -
	I0116 02:24:32.939125  991718 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 02:24:32.939143  991718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 02:24:32.967360  991718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 02:24:33.370602  991718 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 02:24:33.375312  991718 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 02:24:33.378391  991718 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 02:24:33.392062  991718 command_runner.go:130] > daemonset.apps/kindnet configured
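	With two nodes detected, minikube re-applies the kindnet CNI manifest so the DaemonSet also covers the new worker. A hedged follow-up check (the DaemonSet name comes from the apply output above; its kube-system namespace is an assumption):

	    kubectl --context multinode-835787 -n kube-system get daemonset kindnet
	    kubectl --context multinode-835787 -n kube-system get pods -o wide | grep kindnet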
	I0116 02:24:33.395482  991718 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:24:33.395890  991718 kapi.go:59] client config for multinode-835787: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key", CAFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:24:33.396381  991718 round_trippers.go:463] GET https://192.168.39.50:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:24:33.396402  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:33.396414  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:33.396423  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:33.399215  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:24:33.399238  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:33.399246  991718 round_trippers.go:580]     Audit-Id: f92fdf0c-0489-4ec1-ab79-0ecc271865b8
	I0116 02:24:33.399254  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:33.399262  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:33.399268  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:33.399275  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:33.399284  991718 round_trippers.go:580]     Content-Length: 291
	I0116 02:24:33.399291  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:33 GMT
	I0116 02:24:33.399325  991718 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c3d1d02d-1d3d-4837-b3ba-04423f0d8104","resourceVersion":"456","creationTimestamp":"2024-01-16T02:23:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 02:24:33.399432  991718 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-835787" context rescaled to 1 replicas
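	For multi-node profiles minikube keeps the coredns Deployment at a single replica by reading (and if necessary writing) its scale subresource over the API, as the GET above shows. The same read/write expressed with kubectl would look roughly like this (a sketch, not taken from the log):

	    kubectl --context multinode-835787 -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}'
	    kubectl --context multinode-835787 -n kube-system scale deployment coredns --replicas=1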
	I0116 02:24:33.399467  991718 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 02:24:33.401171  991718 out.go:177] * Verifying Kubernetes components...
	I0116 02:24:33.402735  991718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:24:33.433313  991718 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:24:33.433711  991718 kapi.go:59] client config for multinode-835787: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key", CAFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:24:33.434087  991718 node_ready.go:35] waiting up to 6m0s for node "multinode-835787-m02" to be "Ready" ...
	I0116 02:24:33.434227  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:24:33.434237  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:33.434247  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:33.434257  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:33.439786  991718 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 02:24:33.439816  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:33.439827  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:33.439834  991718 round_trippers.go:580]     Content-Length: 4082
	I0116 02:24:33.439842  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:33 GMT
	I0116 02:24:33.439850  991718 round_trippers.go:580]     Audit-Id: 70d0a819-4e54-4f01-90bf-1a02a0a7f610
	I0116 02:24:33.439859  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:33.439874  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:33.439882  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:33.440008  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"513","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_24_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I0116 02:24:33.934641  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:24:33.934668  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:33.934677  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:33.934683  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:33.939313  991718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:24:33.939340  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:33.939348  991718 round_trippers.go:580]     Audit-Id: a5694e58-584c-4cde-af70-24c0bf75cffd
	I0116 02:24:33.939354  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:33.939359  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:33.939364  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:33.939369  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:33.939380  991718 round_trippers.go:580]     Content-Length: 4082
	I0116 02:24:33.939387  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:33 GMT
	I0116 02:24:33.939491  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"513","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_24_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I0116 02:24:34.435125  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:24:34.435163  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:34.435174  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:34.435183  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:34.441408  991718 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 02:24:34.441440  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:34.441448  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:34.441454  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:34.441459  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:34.441467  991718 round_trippers.go:580]     Content-Length: 4082
	I0116 02:24:34.441476  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:34 GMT
	I0116 02:24:34.441484  991718 round_trippers.go:580]     Audit-Id: f52cba61-24da-410c-8828-b1b9187615da
	I0116 02:24:34.441493  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:34.442522  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"513","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_24_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I0116 02:24:34.934888  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:24:34.934916  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:34.934928  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:34.934937  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:34.939400  991718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:24:34.939440  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:34.939450  991718 round_trippers.go:580]     Content-Length: 4082
	I0116 02:24:34.939458  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:34 GMT
	I0116 02:24:34.939468  991718 round_trippers.go:580]     Audit-Id: 34abb23b-a74a-4d09-aca5-ca44f3e24255
	I0116 02:24:34.939475  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:34.939483  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:34.939491  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:34.939500  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:34.939617  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"513","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_24_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I0116 02:24:35.435087  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:24:35.435117  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:35.435125  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:35.435131  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:35.438202  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:24:35.438239  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:35.438257  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:35.438266  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:35.438275  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:35.438283  991718 round_trippers.go:580]     Content-Length: 4082
	I0116 02:24:35.438291  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:35 GMT
	I0116 02:24:35.438298  991718 round_trippers.go:580]     Audit-Id: f4ee2d37-8f2d-4a98-b182-198e35177748
	I0116 02:24:35.438305  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:35.438362  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"513","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_24_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I0116 02:24:35.438610  991718 node_ready.go:58] node "multinode-835787-m02" has status "Ready":"False"
	I0116 02:24:35.934510  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:24:35.934546  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:35.934558  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:35.934567  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:35.937461  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:24:35.937494  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:35.937506  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:35.937515  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:35.937523  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:35 GMT
	I0116 02:24:35.937532  991718 round_trippers.go:580]     Audit-Id: f0c492a8-b8e8-4f1f-9510-5af74703d55d
	I0116 02:24:35.937540  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:35.937549  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:35.937711  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"519","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_24_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0116 02:24:36.434389  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:24:36.434419  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:36.434428  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:36.434435  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:36.437740  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:24:36.437769  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:36.437781  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:36.437790  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:36.437816  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:36 GMT
	I0116 02:24:36.437828  991718 round_trippers.go:580]     Audit-Id: 930e134e-ef74-4dc2-ad96-8e2e0265c02a
	I0116 02:24:36.437836  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:36.437845  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:36.438051  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"519","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_24_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0116 02:24:36.934669  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:24:36.934702  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:36.934716  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:36.934726  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:36.937435  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:24:36.937468  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:36.937478  991718 round_trippers.go:580]     Audit-Id: 4dff0ab2-4213-4854-adc8-ec47c6dbd67f
	I0116 02:24:36.937487  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:36.937495  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:36.937503  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:36.937512  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:36.937520  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:36 GMT
	I0116 02:24:36.937689  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"519","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_24_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0116 02:24:37.434685  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:24:37.434716  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:37.434730  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:37.434741  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:37.437608  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:24:37.437633  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:37.437640  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:37.437646  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:37 GMT
	I0116 02:24:37.437651  991718 round_trippers.go:580]     Audit-Id: 33e4c9fb-3796-44a5-875a-ee057054a420
	I0116 02:24:37.437657  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:37.437663  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:37.437670  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:37.437888  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"519","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_24_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0116 02:24:37.934571  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:24:37.934599  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:37.934608  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:37.934614  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:37.937668  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:24:37.937699  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:37.937711  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:37 GMT
	I0116 02:24:37.937720  991718 round_trippers.go:580]     Audit-Id: bf25f325-957c-40d0-a990-93326e921af4
	I0116 02:24:37.937729  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:37.937737  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:37.937745  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:37.937799  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:37.938267  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"519","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_24_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0116 02:24:37.938672  991718 node_ready.go:58] node "multinode-835787-m02" has status "Ready":"False"
	I0116 02:24:38.434980  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:24:38.435007  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:38.435016  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:38.435023  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:38.438740  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:24:38.438770  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:38.438777  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:38.438783  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:38.438788  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:38 GMT
	I0116 02:24:38.438793  991718 round_trippers.go:580]     Audit-Id: a70e5575-2bd8-406d-b40c-49756b9ab914
	I0116 02:24:38.438798  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:38.438806  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:38.439462  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"519","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_24_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0116 02:24:38.935097  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:24:38.935131  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:38.935142  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:38.935151  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:38.938221  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:24:38.938247  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:38.938255  991718 round_trippers.go:580]     Audit-Id: a72a48da-a45d-45af-b4f8-17b1eaadccb3
	I0116 02:24:38.938261  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:38.938266  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:38.938271  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:38.938276  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:38.938281  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:38 GMT
	I0116 02:24:38.938482  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"519","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_24_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0116 02:24:39.435240  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:24:39.435271  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:39.435284  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:39.435291  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:39.438591  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:24:39.438629  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:39.438640  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:39.438649  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:39 GMT
	I0116 02:24:39.438657  991718 round_trippers.go:580]     Audit-Id: 1eedddf5-e280-4291-8740-90992a6eac45
	I0116 02:24:39.438665  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:39.438673  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:39.438681  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:39.438832  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"519","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_24_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0116 02:24:39.935058  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:24:39.935080  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:39.935088  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:39.935094  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:39.937818  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:24:39.937848  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:39.937859  991718 round_trippers.go:580]     Audit-Id: 5bdde56c-59ec-4de6-a74f-4b22ecfcb563
	I0116 02:24:39.937870  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:39.937876  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:39.937884  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:39.937891  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:39.937906  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:39 GMT
	I0116 02:24:39.938315  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"519","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_24_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I0116 02:24:40.435112  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:24:40.435149  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:40.435158  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:40.435164  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:40.438149  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:24:40.438175  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:40.438183  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:40.438188  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:40.438194  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:40 GMT
	I0116 02:24:40.438202  991718 round_trippers.go:580]     Audit-Id: e3a049bd-18c2-4a45-9e9c-7ce5106a0138
	I0116 02:24:40.438207  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:40.438212  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:40.438422  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"536","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_24_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3253 chars]
	I0116 02:24:40.438812  991718 node_ready.go:49] node "multinode-835787-m02" has status "Ready":"True"
	I0116 02:24:40.438843  991718 node_ready.go:38] duration metric: took 7.004728924s waiting for node "multinode-835787-m02" to be "Ready" ...
	I0116 02:24:40.438859  991718 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:24:40.438951  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 02:24:40.438959  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:40.438966  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:40.438972  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:40.445772  991718 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 02:24:40.445794  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:40.445811  991718 round_trippers.go:580]     Audit-Id: 4de82d86-4f73-4d59-82de-42128e645e5d
	I0116 02:24:40.445818  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:40.445824  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:40.445829  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:40.445838  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:40.445845  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:40 GMT
	I0116 02:24:40.448677  991718 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"536"},"items":[{"metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"452","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67324 chars]
	I0116 02:24:40.450848  991718 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-965sn" in "kube-system" namespace to be "Ready" ...
	I0116 02:24:40.450936  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:24:40.450944  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:40.450952  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:40.450958  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:40.453478  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:24:40.453498  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:40.453508  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:40.453520  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:40.453526  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:40.453534  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:40.453542  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:40 GMT
	I0116 02:24:40.453552  991718 round_trippers.go:580]     Audit-Id: f89d3a6a-0e86-4b35-8e5f-7508b487e184
	I0116 02:24:40.453687  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"452","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0116 02:24:40.454204  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:24:40.454221  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:40.454232  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:40.454241  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:40.456649  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:24:40.456666  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:40.456678  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:40.456684  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:40 GMT
	I0116 02:24:40.456689  991718 round_trippers.go:580]     Audit-Id: f7b168b0-f983-4091-a70a-97515eb1b2d0
	I0116 02:24:40.456694  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:40.456705  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:40.456713  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:40.457278  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:24:40.457582  991718 pod_ready.go:92] pod "coredns-5dd5756b68-965sn" in "kube-system" namespace has status "Ready":"True"
	I0116 02:24:40.457597  991718 pod_ready.go:81] duration metric: took 6.724053ms waiting for pod "coredns-5dd5756b68-965sn" in "kube-system" namespace to be "Ready" ...
	I0116 02:24:40.457605  991718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:24:40.457660  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-835787
	I0116 02:24:40.457697  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:40.457707  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:40.457716  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:40.460471  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:24:40.460493  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:40.460502  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:40.460510  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:40 GMT
	I0116 02:24:40.460518  991718 round_trippers.go:580]     Audit-Id: e9a202f8-b74d-4416-bf68-da919cf2ecec
	I0116 02:24:40.460525  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:40.460533  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:40.460544  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:40.460655  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-835787","namespace":"kube-system","uid":"ccb51de1-d565-42b0-bd30-9b1acb1c725d","resourceVersion":"443","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.50:2379","kubernetes.io/config.hash":"108085f55363e386b9f9c083ac579444","kubernetes.io/config.mirror":"108085f55363e386b9f9c083ac579444","kubernetes.io/config.seen":"2024-01-16T02:23:33.032941198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0116 02:24:40.461030  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:24:40.461046  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:40.461056  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:40.461064  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:40.463501  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:24:40.463520  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:40.463529  991718 round_trippers.go:580]     Audit-Id: 1165a683-18ad-499d-b926-4e3b67d18898
	I0116 02:24:40.463534  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:40.463539  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:40.463544  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:40.463549  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:40.463554  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:40 GMT
	I0116 02:24:40.463753  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:24:40.464183  991718 pod_ready.go:92] pod "etcd-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:24:40.464204  991718 pod_ready.go:81] duration metric: took 6.586756ms waiting for pod "etcd-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:24:40.464228  991718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:24:40.464312  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-835787
	I0116 02:24:40.464320  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:40.464331  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:40.464342  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:40.466472  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:24:40.466490  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:40.466499  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:40.466506  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:40 GMT
	I0116 02:24:40.466514  991718 round_trippers.go:580]     Audit-Id: a14e92b1-e5ec-4799-9a01-4106eb2b95b3
	I0116 02:24:40.466522  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:40.466530  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:40.466544  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:40.466739  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-835787","namespace":"kube-system","uid":"9c26db11-7208-4540-8a73-407a6edd3a0b","resourceVersion":"444","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.50:8443","kubernetes.io/config.hash":"b27880b6b81ca11dc023b4901941ff6f","kubernetes.io/config.mirror":"b27880b6b81ca11dc023b4901941ff6f","kubernetes.io/config.seen":"2024-01-16T02:23:33.032945135Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0116 02:24:40.467270  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:24:40.467291  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:40.467301  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:40.467309  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:40.470235  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:24:40.470257  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:40.470266  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:40.470274  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:40.470283  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:40 GMT
	I0116 02:24:40.470293  991718 round_trippers.go:580]     Audit-Id: 2954216b-b9aa-4e9a-a5d7-86ebf881badd
	I0116 02:24:40.470305  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:40.470313  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:40.470500  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:24:40.470819  991718 pod_ready.go:92] pod "kube-apiserver-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:24:40.470838  991718 pod_ready.go:81] duration metric: took 6.599626ms waiting for pod "kube-apiserver-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:24:40.470851  991718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:24:40.470911  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-835787
	I0116 02:24:40.470920  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:40.470931  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:40.470942  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:40.473223  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:24:40.473243  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:40.473251  991718 round_trippers.go:580]     Audit-Id: 93a9e3ca-cb1f-4642-bec0-78b193e2fb00
	I0116 02:24:40.473259  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:40.473268  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:40.473280  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:40.473291  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:40.473303  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:40 GMT
	I0116 02:24:40.473456  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-835787","namespace":"kube-system","uid":"daf9e312-54ad-4a4e-b334-9b84e55f8fef","resourceVersion":"445","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6adb137abb6e7ac4dcf8e50e41a3773b","kubernetes.io/config.mirror":"6adb137abb6e7ac4dcf8e50e41a3773b","kubernetes.io/config.seen":"2024-01-16T02:23:33.032946146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0116 02:24:40.473913  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:24:40.473929  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:40.473939  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:40.473948  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:40.475730  991718 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:24:40.475755  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:40.475766  991718 round_trippers.go:580]     Audit-Id: f6358784-ab14-476b-9a04-09a8e6640cc1
	I0116 02:24:40.475778  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:40.475790  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:40.475799  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:40.475811  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:40.475833  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:40 GMT
	I0116 02:24:40.476007  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:24:40.476278  991718 pod_ready.go:92] pod "kube-controller-manager-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:24:40.476293  991718 pod_ready.go:81] duration metric: took 5.434241ms waiting for pod "kube-controller-manager-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:24:40.476302  991718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gbvc2" in "kube-system" namespace to be "Ready" ...
	I0116 02:24:40.635732  991718 request.go:629] Waited for 159.367436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbvc2
	I0116 02:24:40.635831  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbvc2
	I0116 02:24:40.635842  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:40.635853  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:40.635865  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:40.638855  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:24:40.638887  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:40.638896  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:40.638904  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:40.638911  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:40 GMT
	I0116 02:24:40.638919  991718 round_trippers.go:580]     Audit-Id: ea1a6f6b-b296-4394-b96e-f5165d25eecc
	I0116 02:24:40.638930  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:40.638941  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:40.639153  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gbvc2","generateName":"kube-proxy-","namespace":"kube-system","uid":"74d63696-cb46-484d-937b-8883e6f1df06","resourceVersion":"416","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0116 02:24:40.836103  991718 request.go:629] Waited for 196.446478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:24:40.836219  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:24:40.836229  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:40.836243  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:40.836254  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:40.840256  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:24:40.840295  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:40.840307  991718 round_trippers.go:580]     Audit-Id: da066df1-61c1-40ca-b15a-96f609c5100a
	I0116 02:24:40.840316  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:40.840324  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:40.840331  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:40.840347  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:40.840353  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:40 GMT
	I0116 02:24:40.840513  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:24:40.840866  991718 pod_ready.go:92] pod "kube-proxy-gbvc2" in "kube-system" namespace has status "Ready":"True"
	I0116 02:24:40.840886  991718 pod_ready.go:81] duration metric: took 364.577897ms waiting for pod "kube-proxy-gbvc2" in "kube-system" namespace to be "Ready" ...
	I0116 02:24:40.840896  991718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hxx8p" in "kube-system" namespace to be "Ready" ...
	I0116 02:24:41.035995  991718 request.go:629] Waited for 194.992685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxx8p
	I0116 02:24:41.036076  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxx8p
	I0116 02:24:41.036082  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:41.036090  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:41.036096  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:41.039241  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:24:41.039279  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:41.039287  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:41.039293  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:41.039298  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:41.039303  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:41 GMT
	I0116 02:24:41.039308  991718 round_trippers.go:580]     Audit-Id: e99829bc-33be-4dbe-ba62-3fb7ba408a88
	I0116 02:24:41.039324  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:41.039520  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hxx8p","generateName":"kube-proxy-","namespace":"kube-system","uid":"9c35aa68-14ac-41e1-81f8-8fdb0c48d9f1","resourceVersion":"525","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0116 02:24:41.235328  991718 request.go:629] Waited for 195.327032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:24:41.235423  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:24:41.235429  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:41.235437  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:41.235446  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:41.238158  991718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:24:41.238186  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:41.238194  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:41.238200  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:41 GMT
	I0116 02:24:41.238205  991718 round_trippers.go:580]     Audit-Id: 87cb4f9d-588a-400c-a299-6dd4e5e755a1
	I0116 02:24:41.238210  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:41.238219  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:41.238224  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:41.238401  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"537","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_24_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3133 chars]
	I0116 02:24:41.238665  991718 pod_ready.go:92] pod "kube-proxy-hxx8p" in "kube-system" namespace has status "Ready":"True"
	I0116 02:24:41.238682  991718 pod_ready.go:81] duration metric: took 397.778811ms waiting for pod "kube-proxy-hxx8p" in "kube-system" namespace to be "Ready" ...
	I0116 02:24:41.238692  991718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:24:41.435899  991718 request.go:629] Waited for 197.118256ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-835787
	I0116 02:24:41.436008  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-835787
	I0116 02:24:41.436014  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:41.436021  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:41.436028  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:41.439438  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:24:41.439461  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:41.439471  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:41.439480  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:41.439488  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:41.439496  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:41.439505  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:41 GMT
	I0116 02:24:41.439512  991718 round_trippers.go:580]     Audit-Id: 6e289af7-91e0-4c18-8b24-b2c29429e341
	I0116 02:24:41.440132  991718 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-835787","namespace":"kube-system","uid":"7b9c28cc-6e78-413a-af72-511714d2462e","resourceVersion":"442","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"230f2dad53142209ac2ae48ed27aa7b4","kubernetes.io/config.mirror":"230f2dad53142209ac2ae48ed27aa7b4","kubernetes.io/config.seen":"2024-01-16T02:23:33.032947019Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0116 02:24:41.635951  991718 request.go:629] Waited for 195.418358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:24:41.636044  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:24:41.636050  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:41.636058  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:41.636064  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:41.639191  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:24:41.639226  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:41.639237  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:41.639245  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:41.639251  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:41.639258  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:41 GMT
	I0116 02:24:41.639266  991718 round_trippers.go:580]     Audit-Id: fad52e0b-b9dc-4fe9-80d0-4e36706ebbb5
	I0116 02:24:41.639274  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:41.639662  991718 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:24:41.640006  991718 pod_ready.go:92] pod "kube-scheduler-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:24:41.640026  991718 pod_ready.go:81] duration metric: took 401.325487ms waiting for pod "kube-scheduler-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:24:41.640040  991718 pod_ready.go:38] duration metric: took 1.201149673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:24:41.640063  991718 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:24:41.640116  991718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:24:41.655970  991718 system_svc.go:56] duration metric: took 15.896993ms WaitForService to wait for kubelet.
	I0116 02:24:41.656001  991718 kubeadm.go:581] duration metric: took 8.256496805s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:24:41.656021  991718 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:24:41.835459  991718 request.go:629] Waited for 179.35704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes
	I0116 02:24:41.835532  991718 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes
	I0116 02:24:41.835537  991718 round_trippers.go:469] Request Headers:
	I0116 02:24:41.835547  991718 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:24:41.835555  991718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:24:41.839203  991718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:24:41.839242  991718 round_trippers.go:577] Response Headers:
	I0116 02:24:41.839253  991718 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:24:41.839261  991718 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:24:41 GMT
	I0116 02:24:41.839269  991718 round_trippers.go:580]     Audit-Id: 259f79d0-c9be-415c-be0c-f16e08fc75d3
	I0116 02:24:41.839277  991718 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:24:41.839285  991718 round_trippers.go:580]     Content-Type: application/json
	I0116 02:24:41.839292  991718 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:24:41.839491  991718 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"540"},"items":[{"metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"427","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 10076 chars]
	I0116 02:24:41.839963  991718 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:24:41.839981  991718 node_conditions.go:123] node cpu capacity is 2
	I0116 02:24:41.839992  991718 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:24:41.839996  991718 node_conditions.go:123] node cpu capacity is 2
	I0116 02:24:41.840000  991718 node_conditions.go:105] duration metric: took 183.975039ms to run NodePressure ...
	I0116 02:24:41.840014  991718 start.go:228] waiting for startup goroutines ...
	I0116 02:24:41.840039  991718 start.go:242] writing updated cluster config ...
	I0116 02:24:41.840336  991718 ssh_runner.go:195] Run: rm -f paused
	I0116 02:24:41.892771  991718 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 02:24:41.895466  991718 out.go:177] * Done! kubectl is now configured to use "multinode-835787" cluster and "default" namespace by default
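	The pod_ready.go lines above record repeated GETs against /api/v1/namespaces/kube-system/pods/<name>, each followed by a node GET, until every control-plane pod reports the Ready condition. As a rough illustration only (this is a hedged client-go sketch, not minikube's actual pod_ready.go implementation; the kubeconfig path, pod name, and 500ms retry interval are assumptions for the example), the same check can be expressed as a small poll loop:

	// Illustrative sketch: poll one pod until its Ready condition is True,
	// the same condition the pod_ready.go log lines above are waiting on.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical inputs; the logs above happen to use namespace "kube-system"
		// and pods such as "kube-scheduler-multinode-835787".
		const namespace, podName = "kube-system", "kube-scheduler-multinode-835787"

		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Mirrors the "waiting up to 6m0s" budget seen in the log.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), podName, metav1.GetOptions{})
			if err == nil {
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						fmt.Printf("pod %q is Ready\n", podName)
						return
					}
				}
			}
			// Assumed retry interval; in the real log, client-side throttling
			// (request.go:629) is what spaces consecutive requests out.
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Printf("timed out waiting for pod %q to be Ready\n", podName)
	}

	The "Waited for ... due to client-side throttling" entries interleaved above come from the same pattern: each readiness probe issues a pod GET plus a node GET, and the client rate limiter delays whichever request exceeds its budget.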
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 02:22:59 UTC, ends at Tue 2024-01-16 02:24:49 UTC. --
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.108705948Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d3f5ddb6-5f3c-4260-8e80-6d4307ea3028 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.110019627Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=697bfc02-10e7-48f4-906a-6b10d7257c12 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.110639977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705371889110623881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=697bfc02-10e7-48f4-906a-6b10d7257c12 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.111291254Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=96925bf3-e705-49ad-b88b-05ff7da112df name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.111342162Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=96925bf3-e705-49ad-b88b-05ff7da112df name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.111602170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7051212bc9bf2eadc16f8ce9d8cfc9a837898fabcc2971ef94f983f26d24566d,PodSandboxId:9f57bb2d2760223b871d92283fddc0362f3f614e9b2867f1f11f3863f50c2a25,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705371884685084759,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-f6p29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de7231c8-3c4b-4fe1-a720-0e2b00c3881f,},Annotations:map[string]string{io.kubernetes.container.hash: c0b7e940,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3733f878a2fd7fc62827a6fc6f490575e482a2abd3eb73d1084d561b465b5e8f,PodSandboxId:9d0b0de92727e6d2cd3f4ba269584b64263333e9c3adfbbfb661fb3425976f75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705371832990142994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-965sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0898f09-1a64-4beb-bfbf-de15f2e07038,},Annotations:map[string]string{io.kubernetes.container.hash: 351cd70e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea87fe00ab854457392b21f17d3385ad451a1c3d0172924ffd2bec07b216b2fe,PodSandboxId:1cf4df11eefca92904c4c760a4e6319f15d897df5d9739363b55b7f9ccd02be2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705371832725313133,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 2d18fde8-ca44-4257-8475-100cd8b34ef8,},Annotations:map[string]string{io.kubernetes.container.hash: f55ed3e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:916f890dd8a3237b7aa11ea2f120bf21e6050c6102802eee7700c25bedf5f30d,PodSandboxId:85f734e352e4355dfacc3f300e7778807f907e8613aeb4008f06122f2b66c948,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705371829620002854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-755b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: ee1ea8c4-abfe-4fea-9f71-32840f6790ed,},Annotations:map[string]string{io.kubernetes.container.hash: adda7158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ff0a671b659aada571d04e900c99f9db4c22ca5b1ff43013767e76aa06884db,PodSandboxId:0d6c68a6484a3a00e339dd77e49713673441b836a80b36cc0f58afe89372c19f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705371827560989199,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gbvc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74d63696-cb46-484d-937b-8883e6
f1df06,},Annotations:map[string]string{io.kubernetes.container.hash: f2bc5e57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb8281f6da994f37b4479f1fd9f434654f465d3311f6341e03af21b259f343c,PodSandboxId:8098da9195564bb1a8e274de455d86a1d8983fd36e69f45dc8b7f4936fb5442b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705371805661962751,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108085f55363e386b9f9c083ac579444,},Annotations:map[string]string{io.kubernetes
.container.hash: 156d26bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d898998193986881c6e265f064f078dc716114d2642e7c9b13934a85d0cb4139,PodSandboxId:e6c0238478893db38e1bdd26d6284ae01869e6e4f52988105b9003f1cd3c35df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705371805261907699,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27880b6b81ca11dc023b4901941ff6f,},Annotations:map[string]string{io.kubernetes.container.h
ash: dfe7758e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fb2dd284eb86bb7d11cc07ee831e36d48701e30a232765ce68fc00fb655469b,PodSandboxId:446b4f9eb6d25d0b8d7d14b9ea5d56a913519758972bd518ee3533da28910bc8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705371805083664263,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230f2dad53142209ac2ae48ed27aa7b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7
a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db896f0c4e4b262a695b62aa18e39225a943e6aa444c72b10259142750b90238,PodSandboxId:61a49b4fa8c2575cb1d2f4f31c3de8fa7f3615dab5221dcafc93a7cbb9e5c805,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705371804896922513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6adb137abb6e7ac4dcf8e50e41a3773b,},Annotations:map[string]string{io.kubernetes
.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=96925bf3-e705-49ad-b88b-05ff7da112df name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.144053096Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fa91f341-be0d-4da0-ad70-225446ee224e name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.144312558Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9f57bb2d2760223b871d92283fddc0362f3f614e9b2867f1f11f3863f50c2a25,Metadata:&PodSandboxMetadata{Name:busybox-5bc68d56bd-f6p29,Uid:de7231c8-3c4b-4fe1-a720-0e2b00c3881f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705371883109165010,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5bc68d56bd-f6p29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de7231c8-3c4b-4fe1-a720-0e2b00c3881f,pod-template-hash: 5bc68d56bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T02:24:42.763418450Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1cf4df11eefca92904c4c760a4e6319f15d897df5d9739363b55b7f9ccd02be2,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2d18fde8-ca44-4257-8475-100cd8b34ef8,Namespace:kube-system,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1705371832240500790,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d18fde8-ca44-4257-8475-100cd8b34ef8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/
tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-16T02:23:51.894971566Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9d0b0de92727e6d2cd3f4ba269584b64263333e9c3adfbbfb661fb3425976f75,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-965sn,Uid:a0898f09-1a64-4beb-bfbf-de15f2e07038,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705371832234144236,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-965sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0898f09-1a64-4beb-bfbf-de15f2e07038,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T02:23:51.885053620Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:85f734e352e4355dfacc3f300e7778807f907e8613aeb4008f06122f2b66c948,Metadata:&PodSandboxMetadata{Name:kindnet-755b9,Uid:ee1ea8c4-abfe-4fea-9f71-32840f6790ed,Namespace:kube-system,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1705371826468134257,Labels:map[string]string{app: kindnet,controller-revision-hash: 5666b6c4d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-755b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1ea8c4-abfe-4fea-9f71-32840f6790ed,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T02:23:46.125569800Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0d6c68a6484a3a00e339dd77e49713673441b836a80b36cc0f58afe89372c19f,Metadata:&PodSandboxMetadata{Name:kube-proxy-gbvc2,Uid:74d63696-cb46-484d-937b-8883e6f1df06,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705371826441279883,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gbvc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74d63696-cb46-484d-937b-8883e6f1df06,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T02:23:46.097038912Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e6c0238478893db38e1bdd26d6284ae01869e6e4f52988105b9003f1cd3c35df,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-835787,Uid:b27880b6b81ca11dc023b4901941ff6f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705371804489966635,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27880b6b81ca11dc023b4901941ff6f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.50:8443,kubernetes.io/config.hash: b27880b6b81ca11dc023b4901941ff6f,kubernetes.io/config.seen: 2024-01-16T02:23:23.951140544Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:446b4f9eb6d25d0b8d7d14b9ea5d56a913
519758972bd518ee3533da28910bc8,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-835787,Uid:230f2dad53142209ac2ae48ed27aa7b4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705371804471717768,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230f2dad53142209ac2ae48ed27aa7b4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 230f2dad53142209ac2ae48ed27aa7b4,kubernetes.io/config.seen: 2024-01-16T02:23:23.951138051Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8098da9195564bb1a8e274de455d86a1d8983fd36e69f45dc8b7f4936fb5442b,Metadata:&PodSandboxMetadata{Name:etcd-multinode-835787,Uid:108085f55363e386b9f9c083ac579444,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705371804458189190,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernete
s.pod.name: etcd-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108085f55363e386b9f9c083ac579444,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.50:2379,kubernetes.io/config.hash: 108085f55363e386b9f9c083ac579444,kubernetes.io/config.seen: 2024-01-16T02:23:23.951139400Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:61a49b4fa8c2575cb1d2f4f31c3de8fa7f3615dab5221dcafc93a7cbb9e5c805,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-835787,Uid:6adb137abb6e7ac4dcf8e50e41a3773b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705371804410609711,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6adb137abb6e7ac4dcf8e50e41a3773b,tier: control-plane,},Annotations:map[string]string{kubernet
es.io/config.hash: 6adb137abb6e7ac4dcf8e50e41a3773b,kubernetes.io/config.seen: 2024-01-16T02:23:23.951134789Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=fa91f341-be0d-4da0-ad70-225446ee224e name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.145229317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=948c3b6a-38ed-404e-80c1-c6a78e3b4581 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.145286626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=948c3b6a-38ed-404e-80c1-c6a78e3b4581 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.145466433Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7051212bc9bf2eadc16f8ce9d8cfc9a837898fabcc2971ef94f983f26d24566d,PodSandboxId:9f57bb2d2760223b871d92283fddc0362f3f614e9b2867f1f11f3863f50c2a25,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705371884685084759,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-f6p29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de7231c8-3c4b-4fe1-a720-0e2b00c3881f,},Annotations:map[string]string{io.kubernetes.container.hash: c0b7e940,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3733f878a2fd7fc62827a6fc6f490575e482a2abd3eb73d1084d561b465b5e8f,PodSandboxId:9d0b0de92727e6d2cd3f4ba269584b64263333e9c3adfbbfb661fb3425976f75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705371832990142994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-965sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0898f09-1a64-4beb-bfbf-de15f2e07038,},Annotations:map[string]string{io.kubernetes.container.hash: 351cd70e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea87fe00ab854457392b21f17d3385ad451a1c3d0172924ffd2bec07b216b2fe,PodSandboxId:1cf4df11eefca92904c4c760a4e6319f15d897df5d9739363b55b7f9ccd02be2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705371832725313133,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 2d18fde8-ca44-4257-8475-100cd8b34ef8,},Annotations:map[string]string{io.kubernetes.container.hash: f55ed3e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:916f890dd8a3237b7aa11ea2f120bf21e6050c6102802eee7700c25bedf5f30d,PodSandboxId:85f734e352e4355dfacc3f300e7778807f907e8613aeb4008f06122f2b66c948,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705371829620002854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-755b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: ee1ea8c4-abfe-4fea-9f71-32840f6790ed,},Annotations:map[string]string{io.kubernetes.container.hash: adda7158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ff0a671b659aada571d04e900c99f9db4c22ca5b1ff43013767e76aa06884db,PodSandboxId:0d6c68a6484a3a00e339dd77e49713673441b836a80b36cc0f58afe89372c19f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705371827560989199,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gbvc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74d63696-cb46-484d-937b-8883e6
f1df06,},Annotations:map[string]string{io.kubernetes.container.hash: f2bc5e57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb8281f6da994f37b4479f1fd9f434654f465d3311f6341e03af21b259f343c,PodSandboxId:8098da9195564bb1a8e274de455d86a1d8983fd36e69f45dc8b7f4936fb5442b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705371805661962751,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108085f55363e386b9f9c083ac579444,},Annotations:map[string]string{io.kubernetes
.container.hash: 156d26bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d898998193986881c6e265f064f078dc716114d2642e7c9b13934a85d0cb4139,PodSandboxId:e6c0238478893db38e1bdd26d6284ae01869e6e4f52988105b9003f1cd3c35df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705371805261907699,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27880b6b81ca11dc023b4901941ff6f,},Annotations:map[string]string{io.kubernetes.container.h
ash: dfe7758e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fb2dd284eb86bb7d11cc07ee831e36d48701e30a232765ce68fc00fb655469b,PodSandboxId:446b4f9eb6d25d0b8d7d14b9ea5d56a913519758972bd518ee3533da28910bc8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705371805083664263,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230f2dad53142209ac2ae48ed27aa7b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7
a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db896f0c4e4b262a695b62aa18e39225a943e6aa444c72b10259142750b90238,PodSandboxId:61a49b4fa8c2575cb1d2f4f31c3de8fa7f3615dab5221dcafc93a7cbb9e5c805,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705371804896922513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6adb137abb6e7ac4dcf8e50e41a3773b,},Annotations:map[string]string{io.kubernetes
.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=948c3b6a-38ed-404e-80c1-c6a78e3b4581 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.157654226Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fa971872-fef6-4cee-b4b3-f98f58173aeb name=/runtime.v1.RuntimeService/Version
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.157713051Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fa971872-fef6-4cee-b4b3-f98f58173aeb name=/runtime.v1.RuntimeService/Version
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.159024292Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fa7bbbf4-abdb-4720-af81-ae5c18cd210b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.159389373Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705371889159377135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=fa7bbbf4-abdb-4720-af81-ae5c18cd210b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.160534284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=237a2b67-66ea-41ec-b0b5-2c8fbfd842ed name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.160641360Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=237a2b67-66ea-41ec-b0b5-2c8fbfd842ed name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.160974301Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7051212bc9bf2eadc16f8ce9d8cfc9a837898fabcc2971ef94f983f26d24566d,PodSandboxId:9f57bb2d2760223b871d92283fddc0362f3f614e9b2867f1f11f3863f50c2a25,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705371884685084759,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-f6p29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de7231c8-3c4b-4fe1-a720-0e2b00c3881f,},Annotations:map[string]string{io.kubernetes.container.hash: c0b7e940,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3733f878a2fd7fc62827a6fc6f490575e482a2abd3eb73d1084d561b465b5e8f,PodSandboxId:9d0b0de92727e6d2cd3f4ba269584b64263333e9c3adfbbfb661fb3425976f75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705371832990142994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-965sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0898f09-1a64-4beb-bfbf-de15f2e07038,},Annotations:map[string]string{io.kubernetes.container.hash: 351cd70e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea87fe00ab854457392b21f17d3385ad451a1c3d0172924ffd2bec07b216b2fe,PodSandboxId:1cf4df11eefca92904c4c760a4e6319f15d897df5d9739363b55b7f9ccd02be2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705371832725313133,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 2d18fde8-ca44-4257-8475-100cd8b34ef8,},Annotations:map[string]string{io.kubernetes.container.hash: f55ed3e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:916f890dd8a3237b7aa11ea2f120bf21e6050c6102802eee7700c25bedf5f30d,PodSandboxId:85f734e352e4355dfacc3f300e7778807f907e8613aeb4008f06122f2b66c948,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705371829620002854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-755b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: ee1ea8c4-abfe-4fea-9f71-32840f6790ed,},Annotations:map[string]string{io.kubernetes.container.hash: adda7158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ff0a671b659aada571d04e900c99f9db4c22ca5b1ff43013767e76aa06884db,PodSandboxId:0d6c68a6484a3a00e339dd77e49713673441b836a80b36cc0f58afe89372c19f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705371827560989199,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gbvc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74d63696-cb46-484d-937b-8883e6
f1df06,},Annotations:map[string]string{io.kubernetes.container.hash: f2bc5e57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb8281f6da994f37b4479f1fd9f434654f465d3311f6341e03af21b259f343c,PodSandboxId:8098da9195564bb1a8e274de455d86a1d8983fd36e69f45dc8b7f4936fb5442b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705371805661962751,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108085f55363e386b9f9c083ac579444,},Annotations:map[string]string{io.kubernetes
.container.hash: 156d26bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d898998193986881c6e265f064f078dc716114d2642e7c9b13934a85d0cb4139,PodSandboxId:e6c0238478893db38e1bdd26d6284ae01869e6e4f52988105b9003f1cd3c35df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705371805261907699,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27880b6b81ca11dc023b4901941ff6f,},Annotations:map[string]string{io.kubernetes.container.h
ash: dfe7758e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fb2dd284eb86bb7d11cc07ee831e36d48701e30a232765ce68fc00fb655469b,PodSandboxId:446b4f9eb6d25d0b8d7d14b9ea5d56a913519758972bd518ee3533da28910bc8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705371805083664263,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230f2dad53142209ac2ae48ed27aa7b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7
a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db896f0c4e4b262a695b62aa18e39225a943e6aa444c72b10259142750b90238,PodSandboxId:61a49b4fa8c2575cb1d2f4f31c3de8fa7f3615dab5221dcafc93a7cbb9e5c805,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705371804896922513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6adb137abb6e7ac4dcf8e50e41a3773b,},Annotations:map[string]string{io.kubernetes
.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=237a2b67-66ea-41ec-b0b5-2c8fbfd842ed name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.201571581Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=72866be4-cdcb-4011-b87e-061359c2b415 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.201629586Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=72866be4-cdcb-4011-b87e-061359c2b415 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.202694961Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=beb7fdaf-0791-4323-92b5-1ce2c8c12c8e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.203168854Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705371889203154133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=beb7fdaf-0791-4323-92b5-1ce2c8c12c8e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.203942219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=848b5e96-ad34-4136-b266-8cfbb82829ad name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.203988711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=848b5e96-ad34-4136-b266-8cfbb82829ad name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:24:49 multinode-835787 crio[718]: time="2024-01-16 02:24:49.204160920Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7051212bc9bf2eadc16f8ce9d8cfc9a837898fabcc2971ef94f983f26d24566d,PodSandboxId:9f57bb2d2760223b871d92283fddc0362f3f614e9b2867f1f11f3863f50c2a25,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705371884685084759,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-f6p29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de7231c8-3c4b-4fe1-a720-0e2b00c3881f,},Annotations:map[string]string{io.kubernetes.container.hash: c0b7e940,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3733f878a2fd7fc62827a6fc6f490575e482a2abd3eb73d1084d561b465b5e8f,PodSandboxId:9d0b0de92727e6d2cd3f4ba269584b64263333e9c3adfbbfb661fb3425976f75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705371832990142994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-965sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0898f09-1a64-4beb-bfbf-de15f2e07038,},Annotations:map[string]string{io.kubernetes.container.hash: 351cd70e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea87fe00ab854457392b21f17d3385ad451a1c3d0172924ffd2bec07b216b2fe,PodSandboxId:1cf4df11eefca92904c4c760a4e6319f15d897df5d9739363b55b7f9ccd02be2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705371832725313133,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 2d18fde8-ca44-4257-8475-100cd8b34ef8,},Annotations:map[string]string{io.kubernetes.container.hash: f55ed3e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:916f890dd8a3237b7aa11ea2f120bf21e6050c6102802eee7700c25bedf5f30d,PodSandboxId:85f734e352e4355dfacc3f300e7778807f907e8613aeb4008f06122f2b66c948,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705371829620002854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-755b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: ee1ea8c4-abfe-4fea-9f71-32840f6790ed,},Annotations:map[string]string{io.kubernetes.container.hash: adda7158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ff0a671b659aada571d04e900c99f9db4c22ca5b1ff43013767e76aa06884db,PodSandboxId:0d6c68a6484a3a00e339dd77e49713673441b836a80b36cc0f58afe89372c19f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705371827560989199,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gbvc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74d63696-cb46-484d-937b-8883e6
f1df06,},Annotations:map[string]string{io.kubernetes.container.hash: f2bc5e57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb8281f6da994f37b4479f1fd9f434654f465d3311f6341e03af21b259f343c,PodSandboxId:8098da9195564bb1a8e274de455d86a1d8983fd36e69f45dc8b7f4936fb5442b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705371805661962751,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108085f55363e386b9f9c083ac579444,},Annotations:map[string]string{io.kubernetes
.container.hash: 156d26bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d898998193986881c6e265f064f078dc716114d2642e7c9b13934a85d0cb4139,PodSandboxId:e6c0238478893db38e1bdd26d6284ae01869e6e4f52988105b9003f1cd3c35df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705371805261907699,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27880b6b81ca11dc023b4901941ff6f,},Annotations:map[string]string{io.kubernetes.container.h
ash: dfe7758e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fb2dd284eb86bb7d11cc07ee831e36d48701e30a232765ce68fc00fb655469b,PodSandboxId:446b4f9eb6d25d0b8d7d14b9ea5d56a913519758972bd518ee3533da28910bc8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705371805083664263,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230f2dad53142209ac2ae48ed27aa7b4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7
a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db896f0c4e4b262a695b62aa18e39225a943e6aa444c72b10259142750b90238,PodSandboxId:61a49b4fa8c2575cb1d2f4f31c3de8fa7f3615dab5221dcafc93a7cbb9e5c805,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705371804896922513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6adb137abb6e7ac4dcf8e50e41a3773b,},Annotations:map[string]string{io.kubernetes
.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=848b5e96-ad34-4136-b266-8cfbb82829ad name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7051212bc9bf2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   9f57bb2d27602       busybox-5bc68d56bd-f6p29
	3733f878a2fd7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      56 seconds ago       Running             coredns                   0                   9d0b0de92727e       coredns-5dd5756b68-965sn
	ea87fe00ab854       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      56 seconds ago       Running             storage-provisioner       0                   1cf4df11eefca       storage-provisioner
	916f890dd8a32       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      59 seconds ago       Running             kindnet-cni               0                   85f734e352e43       kindnet-755b9
	8ff0a671b659a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                0                   0d6c68a6484a3       kube-proxy-gbvc2
	7eb8281f6da99       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   8098da9195564       etcd-multinode-835787
	d898998193986       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   e6c0238478893       kube-apiserver-multinode-835787
	3fb2dd284eb86       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   446b4f9eb6d25       kube-scheduler-multinode-835787
	db896f0c4e4b2       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   61a49b4fa8c25       kube-controller-manager-multinode-835787
	
	
	==> coredns [3733f878a2fd7fc62827a6fc6f490575e482a2abd3eb73d1084d561b465b5e8f] <==
	[INFO] 10.244.0.3:38531 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139052s
	[INFO] 10.244.1.2:35685 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244032s
	[INFO] 10.244.1.2:45307 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002098893s
	[INFO] 10.244.1.2:54028 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000253699s
	[INFO] 10.244.1.2:41920 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158931s
	[INFO] 10.244.1.2:33009 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001689386s
	[INFO] 10.244.1.2:42583 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110632s
	[INFO] 10.244.1.2:39209 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129778s
	[INFO] 10.244.1.2:50364 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073242s
	[INFO] 10.244.0.3:47906 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000212686s
	[INFO] 10.244.0.3:54769 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088993s
	[INFO] 10.244.0.3:44836 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074287s
	[INFO] 10.244.0.3:39977 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139718s
	[INFO] 10.244.1.2:43303 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171973s
	[INFO] 10.244.1.2:53920 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000215699s
	[INFO] 10.244.1.2:47143 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120365s
	[INFO] 10.244.1.2:54751 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148125s
	[INFO] 10.244.0.3:60423 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000074448s
	[INFO] 10.244.0.3:53887 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00010867s
	[INFO] 10.244.0.3:56048 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000067453s
	[INFO] 10.244.0.3:60099 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130707s
	[INFO] 10.244.1.2:39832 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176016s
	[INFO] 10.244.1.2:34662 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000173205s
	[INFO] 10.244.1.2:33249 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014809s
	[INFO] 10.244.1.2:59209 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120726s
	
	
	==> describe nodes <==
	Name:               multinode-835787
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-835787
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=multinode-835787
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T02_23_34_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:23:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-835787
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:24:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:23:51 +0000   Tue, 16 Jan 2024 02:23:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:23:51 +0000   Tue, 16 Jan 2024 02:23:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:23:51 +0000   Tue, 16 Jan 2024 02:23:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:23:51 +0000   Tue, 16 Jan 2024 02:23:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.50
	  Hostname:    multinode-835787
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 721446812514433291cd434ad703da0e
	  System UUID:                72144681-2514-4332-91cd-434ad703da0e
	  Boot ID:                    17a151c0-6e75-44d9-9419-14508483bebc
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-f6p29                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-965sn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     64s
	  kube-system                 etcd-multinode-835787                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	  kube-system                 kindnet-755b9                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      64s
	  kube-system                 kube-apiserver-multinode-835787             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-multinode-835787    200m (10%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-gbvc2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-multinode-835787             100m (5%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 61s                kube-proxy       
	  Normal  Starting                 86s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  85s (x8 over 86s)  kubelet          Node multinode-835787 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s (x8 over 86s)  kubelet          Node multinode-835787 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s (x7 over 86s)  kubelet          Node multinode-835787 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  85s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 77s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s                kubelet          Node multinode-835787 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s                kubelet          Node multinode-835787 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s                kubelet          Node multinode-835787 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           64s                node-controller  Node multinode-835787 event: Registered Node multinode-835787 in Controller
	  Normal  NodeReady                58s                kubelet          Node multinode-835787 status is now: NodeReady
	
	
	Name:               multinode-835787-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-835787-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=multinode-835787
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_16T02_24_32_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:24:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-835787-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:24:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:24:40 +0000   Tue, 16 Jan 2024 02:24:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:24:40 +0000   Tue, 16 Jan 2024 02:24:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:24:40 +0000   Tue, 16 Jan 2024 02:24:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:24:40 +0000   Tue, 16 Jan 2024 02:24:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    multinode-835787-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d83e5dbc8204ad7954aeb6f0ba554db
	  System UUID:                8d83e5db-c820-4ad7-954a-eb6f0ba554db
	  Boot ID:                    3a522fc8-6938-4f5d-a9fd-3c0700e94c86
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-hzzdv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-nllfm               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17s
	  kube-system                 kube-proxy-hxx8p            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  NodeHasSufficientMemory  17s (x5 over 19s)  kubelet          Node multinode-835787-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s (x5 over 19s)  kubelet          Node multinode-835787-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s (x5 over 19s)  kubelet          Node multinode-835787-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14s                node-controller  Node multinode-835787-m02 event: Registered Node multinode-835787-m02 in Controller
	  Normal  NodeReady                9s                 kubelet          Node multinode-835787-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[Jan16 02:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068240] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.411564] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.549972] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147434] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jan16 02:23] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.330279] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.113840] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.143779] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.105694] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.222406] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[  +9.826381] systemd-fstab-generator[928]: Ignoring "noauto" for root device
	[  +9.311322] systemd-fstab-generator[1265]: Ignoring "noauto" for root device
	[ +20.686674] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [7eb8281f6da994f37b4479f1fd9f434654f465d3311f6341e03af21b259f343c] <==
	{"level":"info","ts":"2024-01-16T02:23:27.405959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c switched to configuration voters=(16941950758946187852)"}
	{"level":"info","ts":"2024-01-16T02:23:27.406092Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c4909210040256fc","local-member-id":"eb1de673f525aa4c","added-peer-id":"eb1de673f525aa4c","added-peer-peer-urls":["https://192.168.39.50:2380"]}
	{"level":"info","ts":"2024-01-16T02:23:27.40762Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-16T02:23:27.40783Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.50:2380"}
	{"level":"info","ts":"2024-01-16T02:23:27.407989Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.50:2380"}
	{"level":"info","ts":"2024-01-16T02:23:27.408858Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"eb1de673f525aa4c","initial-advertise-peer-urls":["https://192.168.39.50:2380"],"listen-peer-urls":["https://192.168.39.50:2380"],"advertise-client-urls":["https://192.168.39.50:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.50:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-16T02:23:27.40896Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-16T02:23:28.081279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-16T02:23:28.081396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-16T02:23:28.081433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c received MsgPreVoteResp from eb1de673f525aa4c at term 1"}
	{"level":"info","ts":"2024-01-16T02:23:28.081463Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c became candidate at term 2"}
	{"level":"info","ts":"2024-01-16T02:23:28.081487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c received MsgVoteResp from eb1de673f525aa4c at term 2"}
	{"level":"info","ts":"2024-01-16T02:23:28.081514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c became leader at term 2"}
	{"level":"info","ts":"2024-01-16T02:23:28.081556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: eb1de673f525aa4c elected leader eb1de673f525aa4c at term 2"}
	{"level":"info","ts":"2024-01-16T02:23:28.083614Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:23:28.084172Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"eb1de673f525aa4c","local-member-attributes":"{Name:multinode-835787 ClientURLs:[https://192.168.39.50:2379]}","request-path":"/0/members/eb1de673f525aa4c/attributes","cluster-id":"c4909210040256fc","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T02:23:28.084243Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T02:23:28.085118Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c4909210040256fc","local-member-id":"eb1de673f525aa4c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:23:28.085242Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:23:28.085282Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:23:28.085574Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T02:23:28.086029Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T02:23:28.086966Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.50:2379"}
	{"level":"info","ts":"2024-01-16T02:23:28.087092Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T02:23:28.08713Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 02:24:49 up 1 min,  0 users,  load average: 0.61, 0.34, 0.13
	Linux multinode-835787 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [916f890dd8a3237b7aa11ea2f120bf21e6050c6102802eee7700c25bedf5f30d] <==
	I0116 02:23:50.560010       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0116 02:23:50.560107       1 main.go:107] hostIP = 192.168.39.50
	podIP = 192.168.39.50
	I0116 02:23:50.560435       1 main.go:116] setting mtu 1500 for CNI 
	I0116 02:23:50.560483       1 main.go:146] kindnetd IP family: "ipv4"
	I0116 02:23:50.560505       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0116 02:23:51.162593       1 main.go:223] Handling node with IPs: map[192.168.39.50:{}]
	I0116 02:23:51.257402       1 main.go:227] handling current node
	I0116 02:24:01.272632       1 main.go:223] Handling node with IPs: map[192.168.39.50:{}]
	I0116 02:24:01.272750       1 main.go:227] handling current node
	I0116 02:24:11.285591       1 main.go:223] Handling node with IPs: map[192.168.39.50:{}]
	I0116 02:24:11.285874       1 main.go:227] handling current node
	I0116 02:24:21.300247       1 main.go:223] Handling node with IPs: map[192.168.39.50:{}]
	I0116 02:24:21.300395       1 main.go:227] handling current node
	I0116 02:24:31.310113       1 main.go:223] Handling node with IPs: map[192.168.39.50:{}]
	I0116 02:24:31.310225       1 main.go:227] handling current node
	I0116 02:24:41.323282       1 main.go:223] Handling node with IPs: map[192.168.39.50:{}]
	I0116 02:24:41.323336       1 main.go:227] handling current node
	I0116 02:24:41.323352       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I0116 02:24:41.323358       1 main.go:250] Node multinode-835787-m02 has CIDR [10.244.1.0/24] 
	I0116 02:24:41.323598       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.15 Flags: [] Table: 0} 
	
	
	==> kube-apiserver [d898998193986881c6e265f064f078dc716114d2642e7c9b13934a85d0cb4139] <==
	I0116 02:23:29.590999       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0116 02:23:29.591123       1 aggregator.go:166] initial CRD sync complete...
	I0116 02:23:29.591149       1 autoregister_controller.go:141] Starting autoregister controller
	I0116 02:23:29.591172       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0116 02:23:29.591195       1 cache.go:39] Caches are synced for autoregister controller
	I0116 02:23:29.606379       1 shared_informer.go:318] Caches are synced for configmaps
	I0116 02:23:29.606586       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0116 02:23:29.606622       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0116 02:23:29.606695       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0116 02:23:29.619653       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0116 02:23:30.472698       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0116 02:23:30.479221       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0116 02:23:30.479285       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0116 02:23:31.308410       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 02:23:31.356034       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0116 02:23:31.512112       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0116 02:23:31.531364       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.50]
	I0116 02:23:31.532456       1 controller.go:624] quota admission added evaluator for: endpoints
	I0116 02:23:31.543707       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0116 02:23:31.647084       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0116 02:23:32.898195       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0116 02:23:32.918384       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0116 02:23:32.935361       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0116 02:23:45.783391       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0116 02:23:45.879661       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [db896f0c4e4b262a695b62aa18e39225a943e6aa444c72b10259142750b90238] <==
	I0116 02:23:46.590458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="103.09µs"
	I0116 02:23:46.604632       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.475307ms"
	I0116 02:23:51.888546       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.098µs"
	I0116 02:23:51.916460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.156µs"
	I0116 02:23:53.328607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.231µs"
	I0116 02:23:54.322186       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.684983ms"
	I0116 02:23:54.322743       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.028µs"
	I0116 02:23:55.784887       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0116 02:24:32.164050       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-835787-m02\" does not exist"
	I0116 02:24:32.187272       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-835787-m02" podCIDRs=["10.244.1.0/24"]
	I0116 02:24:32.200652       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hxx8p"
	I0116 02:24:32.211649       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-nllfm"
	I0116 02:24:35.789641       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-835787-m02"
	I0116 02:24:35.789960       1 event.go:307] "Event occurred" object="multinode-835787-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-835787-m02 event: Registered Node multinode-835787-m02 in Controller"
	I0116 02:24:40.273710       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-835787-m02"
	I0116 02:24:42.678438       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0116 02:24:42.726483       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-hzzdv"
	I0116 02:24:42.751883       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-f6p29"
	I0116 02:24:42.788351       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="111.05025ms"
	I0116 02:24:42.819542       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="31.120778ms"
	I0116 02:24:42.819982       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="275.768µs"
	I0116 02:24:44.861172       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.79949ms"
	I0116 02:24:44.862264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="60.765µs"
	I0116 02:24:45.505370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.849271ms"
	I0116 02:24:45.505536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="41.965µs"
	
	
	==> kube-proxy [8ff0a671b659aada571d04e900c99f9db4c22ca5b1ff43013767e76aa06884db] <==
	I0116 02:23:47.822020       1 server_others.go:69] "Using iptables proxy"
	I0116 02:23:47.849641       1 node.go:141] Successfully retrieved node IP: 192.168.39.50
	I0116 02:23:47.905033       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 02:23:47.905090       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 02:23:47.907959       1 server_others.go:152] "Using iptables Proxier"
	I0116 02:23:47.908103       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 02:23:47.908749       1 server.go:846] "Version info" version="v1.28.4"
	I0116 02:23:47.908840       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 02:23:47.911024       1 config.go:188] "Starting service config controller"
	I0116 02:23:47.911493       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 02:23:47.911554       1 config.go:97] "Starting endpoint slice config controller"
	I0116 02:23:47.911560       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 02:23:47.913199       1 config.go:315] "Starting node config controller"
	I0116 02:23:47.913237       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 02:23:48.011649       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 02:23:48.011865       1 shared_informer.go:318] Caches are synced for service config
	I0116 02:23:48.013584       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3fb2dd284eb86bb7d11cc07ee831e36d48701e30a232765ce68fc00fb655469b] <==
	W0116 02:23:29.630995       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:23:29.631026       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 02:23:29.636215       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 02:23:29.636320       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 02:23:30.481835       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 02:23:30.481894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 02:23:30.570403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 02:23:30.570466       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 02:23:30.578404       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 02:23:30.578457       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 02:23:30.602425       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:23:30.602485       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 02:23:30.631636       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 02:23:30.631712       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 02:23:30.842340       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 02:23:30.842452       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 02:23:30.909142       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 02:23:30.909275       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 02:23:31.018161       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 02:23:31.018216       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 02:23:31.018281       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 02:23:31.018288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0116 02:23:31.056299       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 02:23:31.056356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0116 02:23:32.694846       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 02:22:59 UTC, ends at Tue 2024-01-16 02:24:49 UTC. --
	Jan 16 02:23:46 multinode-835787 kubelet[1273]: I0116 02:23:46.133859    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/74d63696-cb46-484d-937b-8883e6f1df06-kube-proxy\") pod \"kube-proxy-gbvc2\" (UID: \"74d63696-cb46-484d-937b-8883e6f1df06\") " pod="kube-system/kube-proxy-gbvc2"
	Jan 16 02:23:46 multinode-835787 kubelet[1273]: I0116 02:23:46.133931    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74d63696-cb46-484d-937b-8883e6f1df06-xtables-lock\") pod \"kube-proxy-gbvc2\" (UID: \"74d63696-cb46-484d-937b-8883e6f1df06\") " pod="kube-system/kube-proxy-gbvc2"
	Jan 16 02:23:46 multinode-835787 kubelet[1273]: I0116 02:23:46.133955    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74d63696-cb46-484d-937b-8883e6f1df06-lib-modules\") pod \"kube-proxy-gbvc2\" (UID: \"74d63696-cb46-484d-937b-8883e6f1df06\") " pod="kube-system/kube-proxy-gbvc2"
	Jan 16 02:23:46 multinode-835787 kubelet[1273]: I0116 02:23:46.133976    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbh62\" (UniqueName: \"kubernetes.io/projected/74d63696-cb46-484d-937b-8883e6f1df06-kube-api-access-jbh62\") pod \"kube-proxy-gbvc2\" (UID: \"74d63696-cb46-484d-937b-8883e6f1df06\") " pod="kube-system/kube-proxy-gbvc2"
	Jan 16 02:23:46 multinode-835787 kubelet[1273]: I0116 02:23:46.235262    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee1ea8c4-abfe-4fea-9f71-32840f6790ed-xtables-lock\") pod \"kindnet-755b9\" (UID: \"ee1ea8c4-abfe-4fea-9f71-32840f6790ed\") " pod="kube-system/kindnet-755b9"
	Jan 16 02:23:46 multinode-835787 kubelet[1273]: I0116 02:23:46.235308    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee1ea8c4-abfe-4fea-9f71-32840f6790ed-lib-modules\") pod \"kindnet-755b9\" (UID: \"ee1ea8c4-abfe-4fea-9f71-32840f6790ed\") " pod="kube-system/kindnet-755b9"
	Jan 16 02:23:46 multinode-835787 kubelet[1273]: I0116 02:23:46.235349    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ee1ea8c4-abfe-4fea-9f71-32840f6790ed-cni-cfg\") pod \"kindnet-755b9\" (UID: \"ee1ea8c4-abfe-4fea-9f71-32840f6790ed\") " pod="kube-system/kindnet-755b9"
	Jan 16 02:23:46 multinode-835787 kubelet[1273]: I0116 02:23:46.235383    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxnn6\" (UniqueName: \"kubernetes.io/projected/ee1ea8c4-abfe-4fea-9f71-32840f6790ed-kube-api-access-kxnn6\") pod \"kindnet-755b9\" (UID: \"ee1ea8c4-abfe-4fea-9f71-32840f6790ed\") " pod="kube-system/kindnet-755b9"
	Jan 16 02:23:50 multinode-835787 kubelet[1273]: I0116 02:23:50.288842    1273 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gbvc2" podStartSLOduration=5.2887372280000005 podCreationTimestamp="2024-01-16 02:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 02:23:48.276438228 +0000 UTC m=+15.427992457" watchObservedRunningTime="2024-01-16 02:23:50.288737228 +0000 UTC m=+17.440291452"
	Jan 16 02:23:50 multinode-835787 kubelet[1273]: I0116 02:23:50.288935    1273 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-755b9" podStartSLOduration=5.2889182869999996 podCreationTimestamp="2024-01-16 02:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 02:23:50.286263409 +0000 UTC m=+17.437817618" watchObservedRunningTime="2024-01-16 02:23:50.288918287 +0000 UTC m=+17.440472516"
	Jan 16 02:23:51 multinode-835787 kubelet[1273]: I0116 02:23:51.839670    1273 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 16 02:23:51 multinode-835787 kubelet[1273]: I0116 02:23:51.885259    1273 topology_manager.go:215] "Topology Admit Handler" podUID="a0898f09-1a64-4beb-bfbf-de15f2e07038" podNamespace="kube-system" podName="coredns-5dd5756b68-965sn"
	Jan 16 02:23:51 multinode-835787 kubelet[1273]: I0116 02:23:51.895122    1273 topology_manager.go:215] "Topology Admit Handler" podUID="2d18fde8-ca44-4257-8475-100cd8b34ef8" podNamespace="kube-system" podName="storage-provisioner"
	Jan 16 02:23:51 multinode-835787 kubelet[1273]: I0116 02:23:51.980080    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0898f09-1a64-4beb-bfbf-de15f2e07038-config-volume\") pod \"coredns-5dd5756b68-965sn\" (UID: \"a0898f09-1a64-4beb-bfbf-de15f2e07038\") " pod="kube-system/coredns-5dd5756b68-965sn"
	Jan 16 02:23:51 multinode-835787 kubelet[1273]: I0116 02:23:51.980162    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7qs6\" (UniqueName: \"kubernetes.io/projected/a0898f09-1a64-4beb-bfbf-de15f2e07038-kube-api-access-q7qs6\") pod \"coredns-5dd5756b68-965sn\" (UID: \"a0898f09-1a64-4beb-bfbf-de15f2e07038\") " pod="kube-system/coredns-5dd5756b68-965sn"
	Jan 16 02:23:51 multinode-835787 kubelet[1273]: I0116 02:23:51.980189    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfqzk\" (UniqueName: \"kubernetes.io/projected/2d18fde8-ca44-4257-8475-100cd8b34ef8-kube-api-access-lfqzk\") pod \"storage-provisioner\" (UID: \"2d18fde8-ca44-4257-8475-100cd8b34ef8\") " pod="kube-system/storage-provisioner"
	Jan 16 02:23:51 multinode-835787 kubelet[1273]: I0116 02:23:51.980212    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2d18fde8-ca44-4257-8475-100cd8b34ef8-tmp\") pod \"storage-provisioner\" (UID: \"2d18fde8-ca44-4257-8475-100cd8b34ef8\") " pod="kube-system/storage-provisioner"
	Jan 16 02:23:53 multinode-835787 kubelet[1273]: I0116 02:23:53.322629    1273 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.322590919 podCreationTimestamp="2024-01-16 02:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 02:23:53.298821254 +0000 UTC m=+20.450375483" watchObservedRunningTime="2024-01-16 02:23:53.322590919 +0000 UTC m=+20.474145140"
	Jan 16 02:23:54 multinode-835787 kubelet[1273]: I0116 02:23:54.300543    1273 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-965sn" podStartSLOduration=9.300499799 podCreationTimestamp="2024-01-16 02:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 02:23:53.326705879 +0000 UTC m=+20.478260110" watchObservedRunningTime="2024-01-16 02:23:54.300499799 +0000 UTC m=+21.452054026"
	Jan 16 02:24:33 multinode-835787 kubelet[1273]: E0116 02:24:33.246389    1273 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 02:24:33 multinode-835787 kubelet[1273]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 02:24:33 multinode-835787 kubelet[1273]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 02:24:33 multinode-835787 kubelet[1273]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 02:24:42 multinode-835787 kubelet[1273]: I0116 02:24:42.763667    1273 topology_manager.go:215] "Topology Admit Handler" podUID="de7231c8-3c4b-4fe1-a720-0e2b00c3881f" podNamespace="default" podName="busybox-5bc68d56bd-f6p29"
	Jan 16 02:24:42 multinode-835787 kubelet[1273]: I0116 02:24:42.808275    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdpff\" (UniqueName: \"kubernetes.io/projected/de7231c8-3c4b-4fe1-a720-0e2b00c3881f-kube-api-access-cdpff\") pod \"busybox-5bc68d56bd-f6p29\" (UID: \"de7231c8-3c4b-4fe1-a720-0e2b00c3881f\") " pod="default/busybox-5bc68d56bd-f6p29"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-835787 -n multinode-835787
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-835787 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.41s)
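
Note: the failed host-ping check can be re-run by hand against the cluster above. This is a minimal sketch only: the pod name is taken from the kubelet journal above, while 192.168.39.1 is assumed (not confirmed by the test output) to be the libvirt gateway for the 192.168.39.0/24 network used by this profile.

	kubectl --context multinode-835787 exec busybox-5bc68d56bd-f6p29 -- sh -c "ping -c 1 192.168.39.1"
	# expected: a single ICMP reply from the host; the FAIL above means the equivalent check inside the test did not get one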

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (689.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-835787
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-835787
E0116 02:27:27.515650  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:28:12.495529  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-835787: exit status 82 (2m0.304700798s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-835787"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-835787" : exit status 82
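
Aside on the GUEST_STOP_TIMEOUT above: when minikube cannot stop the VM, the underlying KVM domain can be inspected and shut down directly through libvirt. A minimal sketch, assuming shell access on the libvirt host and the default qemu:///system connection used by the kvm2 driver (the domain carries the profile name, as seen in the logs below):

	virsh -c qemu:///system list --all                    # confirm the multinode-835787 domain is still listed as running
	virsh -c qemu:///system shutdown multinode-835787     # request a graceful ACPI shutdown
	virsh -c qemu:///system destroy multinode-835787      # hard power-off, only if the graceful shutdown also hangs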
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-835787 --wait=true -v=8 --alsologtostderr
E0116 02:29:35.544422  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 02:29:50.169825  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:32:27.514798  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:33:12.496196  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 02:33:50.557920  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:34:50.170011  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:36:13.216566  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:37:27.515339  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-835787 --wait=true -v=8 --alsologtostderr: (9m26.19039299s)
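
The repeated cert_rotation errors interleaved with this restart appear to come from the harness still watching client certificates of profiles (functional-941139, addons-321835, ingress-addon-legacy-473102) that were torn down earlier in the run; they are noise rather than part of this failure. A hedged way to confirm which profile certificates actually remain on disk, using the path from the errors above:

	ls -l /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/*/client.crt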
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-835787
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-835787 -n multinode-835787
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-835787 logs -n 25: (1.703170449s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-835787 ssh -n                                                                 | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | multinode-835787-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-835787 cp multinode-835787-m02:/home/docker/cp-test.txt                       | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile13874096/001/cp-test_multinode-835787-m02.txt           |                  |         |         |                     |                     |
	| ssh     | multinode-835787 ssh -n                                                                 | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | multinode-835787-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-835787 cp multinode-835787-m02:/home/docker/cp-test.txt                       | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | multinode-835787:/home/docker/cp-test_multinode-835787-m02_multinode-835787.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-835787 ssh -n                                                                 | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | multinode-835787-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-835787 ssh -n multinode-835787 sudo cat                                       | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | /home/docker/cp-test_multinode-835787-m02_multinode-835787.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-835787 cp multinode-835787-m02:/home/docker/cp-test.txt                       | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | multinode-835787-m03:/home/docker/cp-test_multinode-835787-m02_multinode-835787-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-835787 ssh -n                                                                 | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | multinode-835787-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-835787 ssh -n multinode-835787-m03 sudo cat                                   | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | /home/docker/cp-test_multinode-835787-m02_multinode-835787-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-835787 cp testdata/cp-test.txt                                                | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | multinode-835787-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-835787 ssh -n                                                                 | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | multinode-835787-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-835787 cp multinode-835787-m03:/home/docker/cp-test.txt                       | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile13874096/001/cp-test_multinode-835787-m03.txt           |                  |         |         |                     |                     |
	| ssh     | multinode-835787 ssh -n                                                                 | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | multinode-835787-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-835787 cp multinode-835787-m03:/home/docker/cp-test.txt                       | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | multinode-835787:/home/docker/cp-test_multinode-835787-m03_multinode-835787.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-835787 ssh -n                                                                 | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | multinode-835787-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-835787 ssh -n multinode-835787 sudo cat                                       | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | /home/docker/cp-test_multinode-835787-m03_multinode-835787.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-835787 cp multinode-835787-m03:/home/docker/cp-test.txt                       | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | multinode-835787-m02:/home/docker/cp-test_multinode-835787-m03_multinode-835787-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-835787 ssh -n                                                                 | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | multinode-835787-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-835787 ssh -n multinode-835787-m02 sudo cat                                   | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	|         | /home/docker/cp-test_multinode-835787-m03_multinode-835787-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-835787 node stop m03                                                          | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:25 UTC |
	| node    | multinode-835787 node start                                                             | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:25 UTC | 16 Jan 24 02:26 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-835787                                                                | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:26 UTC |                     |
	| stop    | -p multinode-835787                                                                     | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:26 UTC |                     |
	| start   | -p multinode-835787                                                                     | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:28 UTC | 16 Jan 24 02:37 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-835787                                                                | multinode-835787 | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:28:14
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:28:14.199153  994955 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:28:14.199356  994955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:28:14.199367  994955 out.go:309] Setting ErrFile to fd 2...
	I0116 02:28:14.199372  994955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:28:14.199583  994955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 02:28:14.200162  994955 out.go:303] Setting JSON to false
	I0116 02:28:14.201185  994955 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11444,"bootTime":1705360651,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:28:14.201287  994955 start.go:138] virtualization: kvm guest
	I0116 02:28:14.204062  994955 out.go:177] * [multinode-835787] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:28:14.205684  994955 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 02:28:14.205752  994955 notify.go:220] Checking for updates...
	I0116 02:28:14.207160  994955 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:28:14.208797  994955 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:28:14.210641  994955 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:28:14.211925  994955 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:28:14.213279  994955 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:28:14.215368  994955 config.go:182] Loaded profile config "multinode-835787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:28:14.215466  994955 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:28:14.215939  994955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:28:14.216014  994955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:14.231718  994955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I0116 02:28:14.232262  994955 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:14.232916  994955 main.go:141] libmachine: Using API Version  1
	I0116 02:28:14.232944  994955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:14.233412  994955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:14.233654  994955 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:28:14.271751  994955 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 02:28:14.272900  994955 start.go:298] selected driver: kvm2
	I0116 02:28:14.272916  994955 start.go:902] validating driver "kvm2" against &{Name:multinode-835787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-835787 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.123 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false
ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:28:14.273076  994955 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:28:14.273438  994955 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:28:14.273539  994955 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17967-971255/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 02:28:14.289135  994955 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 02:28:14.289899  994955 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 02:28:14.289968  994955 cni.go:84] Creating CNI manager for ""
	I0116 02:28:14.289980  994955 cni.go:136] 3 nodes found, recommending kindnet
	I0116 02:28:14.289987  994955 start_flags.go:321] config:
	{Name:multinode-835787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-835787 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.123 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:28:14.290208  994955 iso.go:125] acquiring lock: {Name:mkc0ce0b4b435c5fb7570521b794e5982bb7bf6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:28:14.292184  994955 out.go:177] * Starting control plane node multinode-835787 in cluster multinode-835787
	I0116 02:28:14.293525  994955 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:28:14.293577  994955 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 02:28:14.293587  994955 cache.go:56] Caching tarball of preloaded images
	I0116 02:28:14.293684  994955 preload.go:174] Found /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 02:28:14.293699  994955 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 02:28:14.293883  994955 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/config.json ...
	I0116 02:28:14.294127  994955 start.go:365] acquiring machines lock for multinode-835787: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 02:28:14.294183  994955 start.go:369] acquired machines lock for "multinode-835787" in 30.634µs
	I0116 02:28:14.294202  994955 start.go:96] Skipping create...Using existing machine configuration
	I0116 02:28:14.294208  994955 fix.go:54] fixHost starting: 
	I0116 02:28:14.294500  994955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:28:14.294536  994955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:28:14.309503  994955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39885
	I0116 02:28:14.310051  994955 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:28:14.310526  994955 main.go:141] libmachine: Using API Version  1
	I0116 02:28:14.310553  994955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:28:14.310955  994955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:28:14.311204  994955 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:28:14.311332  994955 main.go:141] libmachine: (multinode-835787) Calling .GetState
	I0116 02:28:14.312950  994955 fix.go:102] recreateIfNeeded on multinode-835787: state=Running err=<nil>
	W0116 02:28:14.312970  994955 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 02:28:14.315925  994955 out.go:177] * Updating the running kvm2 "multinode-835787" VM ...
	I0116 02:28:14.317324  994955 machine.go:88] provisioning docker machine ...
	I0116 02:28:14.317353  994955 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:28:14.317626  994955 main.go:141] libmachine: (multinode-835787) Calling .GetMachineName
	I0116 02:28:14.317822  994955 buildroot.go:166] provisioning hostname "multinode-835787"
	I0116 02:28:14.317843  994955 main.go:141] libmachine: (multinode-835787) Calling .GetMachineName
	I0116 02:28:14.317970  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:28:14.320517  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:28:14.320995  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:28:14.321028  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:28:14.321134  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:28:14.321347  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:28:14.321459  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:28:14.321620  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:28:14.321864  994955 main.go:141] libmachine: Using SSH client type: native
	I0116 02:28:14.322281  994955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0116 02:28:14.322304  994955 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-835787 && echo "multinode-835787" | sudo tee /etc/hostname
	I0116 02:28:32.806161  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:28:38.886216  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:28:41.958165  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:28:48.038136  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:28:51.110208  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:28:57.190192  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:29:00.262177  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:29:06.342194  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:29:09.414172  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:29:15.494147  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:29:18.566126  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:29:24.646167  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:29:27.718163  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:29:33.798205  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:29:36.870029  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:29:42.950157  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:29:46.022098  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:29:52.102165  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:29:55.174207  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:30:01.254191  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:30:04.326079  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:30:10.406115  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:30:13.478117  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:30:19.558082  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:30:22.630067  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:30:28.710132  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:30:31.782162  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:30:37.862147  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:30:40.934167  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:30:47.014114  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:30:50.086111  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:30:56.166166  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:30:59.238061  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:31:05.318104  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:31:08.390115  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:31:14.470074  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:31:17.542194  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:31:23.622066  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:31:26.694049  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:31:32.774143  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:31:35.846057  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:31:41.926089  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:31:44.998160  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:31:51.082060  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:31:54.150101  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:32:00.230086  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:32:03.302110  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:32:09.382134  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:32:12.454174  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:32:18.534221  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:32:21.606145  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:32:27.686108  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:32:30.758162  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:32:36.838130  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:32:39.910095  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:32:45.990093  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:32:49.062180  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:32:55.142145  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:32:58.214172  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:33:04.294106  994955 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 02:33:07.296929  994955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:33:07.296990  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:33:07.298899  994955 machine.go:91] provisioned docker machine in 4m52.981554552s
	I0116 02:33:07.298975  994955 fix.go:56] fixHost completed within 4m53.004767195s
	I0116 02:33:07.298986  994955 start.go:83] releasing machines lock for "multinode-835787", held for 4m53.004791268s
	W0116 02:33:07.299004  994955 start.go:694] error starting host: provision: host is not running
	W0116 02:33:07.299148  994955 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 02:33:07.299163  994955 start.go:709] Will try again in 5 seconds ...
	I0116 02:33:12.301341  994955 start.go:365] acquiring machines lock for multinode-835787: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 02:33:12.301511  994955 start.go:369] acquired machines lock for "multinode-835787" in 87.389µs
	I0116 02:33:12.301544  994955 start.go:96] Skipping create...Using existing machine configuration
	I0116 02:33:12.301553  994955 fix.go:54] fixHost starting: 
	I0116 02:33:12.301948  994955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:33:12.301981  994955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:33:12.317424  994955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36909
	I0116 02:33:12.318040  994955 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:33:12.318659  994955 main.go:141] libmachine: Using API Version  1
	I0116 02:33:12.318691  994955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:33:12.319263  994955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:33:12.319464  994955 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:33:12.319595  994955 main.go:141] libmachine: (multinode-835787) Calling .GetState
	I0116 02:33:12.321523  994955 fix.go:102] recreateIfNeeded on multinode-835787: state=Stopped err=<nil>
	I0116 02:33:12.321544  994955 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	W0116 02:33:12.321712  994955 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 02:33:12.324335  994955 out.go:177] * Restarting existing kvm2 VM for "multinode-835787" ...
	I0116 02:33:12.325904  994955 main.go:141] libmachine: (multinode-835787) Calling .Start
	I0116 02:33:12.326065  994955 main.go:141] libmachine: (multinode-835787) Ensuring networks are active...
	I0116 02:33:12.326925  994955 main.go:141] libmachine: (multinode-835787) Ensuring network default is active
	I0116 02:33:12.327314  994955 main.go:141] libmachine: (multinode-835787) Ensuring network mk-multinode-835787 is active
	I0116 02:33:12.327975  994955 main.go:141] libmachine: (multinode-835787) Getting domain xml...
	I0116 02:33:12.328696  994955 main.go:141] libmachine: (multinode-835787) Creating domain...
	I0116 02:33:13.541404  994955 main.go:141] libmachine: (multinode-835787) Waiting to get IP...
	I0116 02:33:13.542316  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:13.542797  994955 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:33:13.542882  994955 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:33:13.542793  995748 retry.go:31] will retry after 209.742578ms: waiting for machine to come up
	I0116 02:33:13.754494  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:13.755065  994955 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:33:13.755107  994955 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:33:13.755024  995748 retry.go:31] will retry after 264.81438ms: waiting for machine to come up
	I0116 02:33:14.021564  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:14.022090  994955 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:33:14.022119  994955 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:33:14.022030  995748 retry.go:31] will retry after 317.040657ms: waiting for machine to come up
	I0116 02:33:14.340407  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:14.340903  994955 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:33:14.340936  994955 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:33:14.340836  995748 retry.go:31] will retry after 539.518515ms: waiting for machine to come up
	I0116 02:33:14.881716  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:14.882283  994955 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:33:14.882314  994955 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:33:14.882200  995748 retry.go:31] will retry after 722.251907ms: waiting for machine to come up
	I0116 02:33:15.606029  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:15.606503  994955 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:33:15.606536  994955 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:33:15.606445  995748 retry.go:31] will retry after 952.039629ms: waiting for machine to come up
	I0116 02:33:16.560655  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:16.561080  994955 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:33:16.561110  994955 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:33:16.561022  995748 retry.go:31] will retry after 1.171930534s: waiting for machine to come up
	I0116 02:33:17.734957  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:17.735406  994955 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:33:17.735435  994955 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:33:17.735359  995748 retry.go:31] will retry after 1.440981867s: waiting for machine to come up
	I0116 02:33:19.178083  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:19.178618  994955 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:33:19.178644  994955 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:33:19.178561  995748 retry.go:31] will retry after 1.67284992s: waiting for machine to come up
	I0116 02:33:20.853715  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:20.854322  994955 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:33:20.854358  994955 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:33:20.854259  995748 retry.go:31] will retry after 1.745609643s: waiting for machine to come up
	I0116 02:33:22.601720  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:22.602178  994955 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:33:22.602213  994955 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:33:22.602117  995748 retry.go:31] will retry after 1.981621894s: waiting for machine to come up
	I0116 02:33:24.586211  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:24.586845  994955 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:33:24.586876  994955 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:33:24.586799  995748 retry.go:31] will retry after 2.635689055s: waiting for machine to come up
	I0116 02:33:27.224368  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:27.224848  994955 main.go:141] libmachine: (multinode-835787) DBG | unable to find current IP address of domain multinode-835787 in network mk-multinode-835787
	I0116 02:33:27.224882  994955 main.go:141] libmachine: (multinode-835787) DBG | I0116 02:33:27.224788  995748 retry.go:31] will retry after 2.934974304s: waiting for machine to come up
	I0116 02:33:30.162975  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.163428  994955 main.go:141] libmachine: (multinode-835787) Found IP for machine: 192.168.39.50
	I0116 02:33:30.163462  994955 main.go:141] libmachine: (multinode-835787) Reserving static IP address...
	I0116 02:33:30.163484  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has current primary IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.163871  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "multinode-835787", mac: "52:54:00:20:87:3c", ip: "192.168.39.50"} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:33:30.163889  994955 main.go:141] libmachine: (multinode-835787) Reserved static IP address: 192.168.39.50
	I0116 02:33:30.163910  994955 main.go:141] libmachine: (multinode-835787) DBG | skip adding static IP to network mk-multinode-835787 - found existing host DHCP lease matching {name: "multinode-835787", mac: "52:54:00:20:87:3c", ip: "192.168.39.50"}
	I0116 02:33:30.163921  994955 main.go:141] libmachine: (multinode-835787) DBG | Getting to WaitForSSH function...
	I0116 02:33:30.163931  994955 main.go:141] libmachine: (multinode-835787) Waiting for SSH to be available...
	I0116 02:33:30.166161  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.166497  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:33:30.166532  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.166610  994955 main.go:141] libmachine: (multinode-835787) DBG | Using SSH client type: external
	I0116 02:33:30.166645  994955 main.go:141] libmachine: (multinode-835787) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa (-rw-------)
	I0116 02:33:30.166687  994955 main.go:141] libmachine: (multinode-835787) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 02:33:30.166703  994955 main.go:141] libmachine: (multinode-835787) DBG | About to run SSH command:
	I0116 02:33:30.166733  994955 main.go:141] libmachine: (multinode-835787) DBG | exit 0
	I0116 02:33:30.262053  994955 main.go:141] libmachine: (multinode-835787) DBG | SSH cmd err, output: <nil>: 
	I0116 02:33:30.262482  994955 main.go:141] libmachine: (multinode-835787) Calling .GetConfigRaw
	I0116 02:33:30.263133  994955 main.go:141] libmachine: (multinode-835787) Calling .GetIP
	I0116 02:33:30.265842  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.266209  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:33:30.266246  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.266483  994955 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/config.json ...
	I0116 02:33:30.266677  994955 machine.go:88] provisioning docker machine ...
	I0116 02:33:30.266695  994955 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:33:30.266899  994955 main.go:141] libmachine: (multinode-835787) Calling .GetMachineName
	I0116 02:33:30.267087  994955 buildroot.go:166] provisioning hostname "multinode-835787"
	I0116 02:33:30.267111  994955 main.go:141] libmachine: (multinode-835787) Calling .GetMachineName
	I0116 02:33:30.267262  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:33:30.269336  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.269695  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:33:30.269725  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.269871  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:33:30.270057  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:33:30.270206  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:33:30.270367  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:33:30.270529  994955 main.go:141] libmachine: Using SSH client type: native
	I0116 02:33:30.270916  994955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0116 02:33:30.270930  994955 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-835787 && echo "multinode-835787" | sudo tee /etc/hostname
	I0116 02:33:30.414876  994955 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-835787
	
	I0116 02:33:30.414907  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:33:30.417847  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.418197  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:33:30.418221  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.418372  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:33:30.418610  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:33:30.418780  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:33:30.418974  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:33:30.419162  994955 main.go:141] libmachine: Using SSH client type: native
	I0116 02:33:30.419490  994955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0116 02:33:30.419508  994955 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-835787' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-835787/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-835787' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:33:30.558818  994955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:33:30.558877  994955 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 02:33:30.558932  994955 buildroot.go:174] setting up certificates
	I0116 02:33:30.558945  994955 provision.go:83] configureAuth start
	I0116 02:33:30.558969  994955 main.go:141] libmachine: (multinode-835787) Calling .GetMachineName
	I0116 02:33:30.559261  994955 main.go:141] libmachine: (multinode-835787) Calling .GetIP
	I0116 02:33:30.562126  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.562558  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:33:30.562592  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.562803  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:33:30.564870  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.565184  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:33:30.565219  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.565315  994955 provision.go:138] copyHostCerts
	I0116 02:33:30.565341  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 02:33:30.565397  994955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 02:33:30.565411  994955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 02:33:30.565497  994955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 02:33:30.565598  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 02:33:30.565618  994955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 02:33:30.565625  994955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 02:33:30.565650  994955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 02:33:30.565730  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 02:33:30.565747  994955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 02:33:30.565753  994955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 02:33:30.565774  994955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 02:33:30.565859  994955 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.multinode-835787 san=[192.168.39.50 192.168.39.50 localhost 127.0.0.1 minikube multinode-835787]
	I0116 02:33:30.825896  994955 provision.go:172] copyRemoteCerts
	I0116 02:33:30.825994  994955 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:33:30.826025  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:33:30.828805  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.829165  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:33:30.829195  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.829333  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:33:30.829558  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:33:30.829712  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:33:30.829889  994955 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa Username:docker}
	I0116 02:33:30.922842  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 02:33:30.922934  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 02:33:30.946486  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 02:33:30.946579  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0116 02:33:30.971056  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 02:33:30.971143  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 02:33:30.995161  994955 provision.go:86] duration metric: configureAuth took 436.175677ms
	I0116 02:33:30.995192  994955 buildroot.go:189] setting minikube options for container-runtime
	I0116 02:33:30.995448  994955 config.go:182] Loaded profile config "multinode-835787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:33:30.995532  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:33:30.998472  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.998816  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:33:30.998889  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:30.999036  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:33:30.999271  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:33:30.999446  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:33:30.999570  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:33:30.999723  994955 main.go:141] libmachine: Using SSH client type: native
	I0116 02:33:31.000053  994955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0116 02:33:31.000070  994955 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 02:33:31.328924  994955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 02:33:31.328958  994955 machine.go:91] provisioned docker machine in 1.062268364s
	I0116 02:33:31.328969  994955 start.go:300] post-start starting for "multinode-835787" (driver="kvm2")
	I0116 02:33:31.328980  994955 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:33:31.329011  994955 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:33:31.329352  994955 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:33:31.329392  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:33:31.331981  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:31.332338  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:33:31.332359  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:31.332525  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:33:31.332734  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:33:31.332868  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:33:31.332968  994955 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa Username:docker}
	I0116 02:33:31.427514  994955 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:33:31.431730  994955 command_runner.go:130] > NAME=Buildroot
	I0116 02:33:31.431762  994955 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 02:33:31.431769  994955 command_runner.go:130] > ID=buildroot
	I0116 02:33:31.431777  994955 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 02:33:31.431784  994955 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 02:33:31.431898  994955 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 02:33:31.431923  994955 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 02:33:31.431997  994955 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 02:33:31.432069  994955 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 02:33:31.432083  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> /etc/ssl/certs/9784822.pem
	I0116 02:33:31.432176  994955 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 02:33:31.440837  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 02:33:31.465443  994955 start.go:303] post-start completed in 136.459113ms
	I0116 02:33:31.465471  994955 fix.go:56] fixHost completed within 19.163918997s
	I0116 02:33:31.465494  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:33:31.468489  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:31.468873  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:33:31.468898  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:31.469072  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:33:31.469304  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:33:31.469502  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:33:31.469694  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:33:31.469888  994955 main.go:141] libmachine: Using SSH client type: native
	I0116 02:33:31.470280  994955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0116 02:33:31.470294  994955 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 02:33:31.602890  994955 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705372411.551413141
	
	I0116 02:33:31.602922  994955 fix.go:206] guest clock: 1705372411.551413141
	I0116 02:33:31.602931  994955 fix.go:219] Guest: 2024-01-16 02:33:31.551413141 +0000 UTC Remote: 2024-01-16 02:33:31.465475573 +0000 UTC m=+317.321821308 (delta=85.937568ms)
	I0116 02:33:31.602953  994955 fix.go:190] guest clock delta is within tolerance: 85.937568ms
	I0116 02:33:31.602959  994955 start.go:83] releasing machines lock for "multinode-835787", held for 19.301432528s
	I0116 02:33:31.603007  994955 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:33:31.603305  994955 main.go:141] libmachine: (multinode-835787) Calling .GetIP
	I0116 02:33:31.605823  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:31.606210  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:33:31.606239  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:31.606415  994955 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:33:31.606943  994955 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:33:31.607133  994955 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:33:31.607243  994955 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:33:31.607307  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:33:31.607345  994955 ssh_runner.go:195] Run: cat /version.json
	I0116 02:33:31.607401  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:33:31.610204  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:31.610235  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:31.610545  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:33:31.610573  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:31.610637  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:33:31.610661  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:31.610692  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:33:31.610899  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:33:31.610967  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:33:31.611057  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:33:31.611219  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:33:31.611285  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:33:31.611397  994955 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa Username:docker}
	I0116 02:33:31.611455  994955 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa Username:docker}
	I0116 02:33:31.704643  994955 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0116 02:33:31.705202  994955 ssh_runner.go:195] Run: systemctl --version
	I0116 02:33:31.730872  994955 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 02:33:31.731539  994955 command_runner.go:130] > systemd 247 (247)
	I0116 02:33:31.731568  994955 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0116 02:33:31.731636  994955 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 02:33:31.884245  994955 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 02:33:31.890934  994955 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0116 02:33:31.891369  994955 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 02:33:31.891440  994955 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:33:31.908373  994955 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0116 02:33:31.908765  994955 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 02:33:31.908786  994955 start.go:475] detecting cgroup driver to use...
	I0116 02:33:31.908870  994955 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:33:31.926711  994955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:33:31.940500  994955 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:33:31.940563  994955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:33:31.956203  994955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:33:31.970881  994955 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:33:31.985909  994955 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0116 02:33:32.091600  994955 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:33:32.108007  994955 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0116 02:33:32.219486  994955 docker.go:233] disabling docker service ...
	I0116 02:33:32.219554  994955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:33:32.232831  994955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:33:32.244899  994955 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0116 02:33:32.245473  994955 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:33:32.362314  994955 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0116 02:33:32.362424  994955 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:33:32.480608  994955 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0116 02:33:32.480646  994955 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0116 02:33:32.480734  994955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 02:33:32.493952  994955 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:33:32.511034  994955 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0116 02:33:32.511383  994955 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 02:33:32.511459  994955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:33:32.521892  994955 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 02:33:32.521967  994955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:33:32.532465  994955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:33:32.542619  994955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:33:32.553092  994955 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:33:32.563645  994955 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:33:32.572765  994955 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 02:33:32.572820  994955 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 02:33:32.572874  994955 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 02:33:32.587042  994955 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:33:32.596266  994955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:33:32.710873  994955 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 02:33:32.879520  994955 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 02:33:32.879611  994955 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 02:33:32.884531  994955 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 02:33:32.884566  994955 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 02:33:32.884576  994955 command_runner.go:130] > Device: 16h/22d	Inode: 781         Links: 1
	I0116 02:33:32.884586  994955 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:33:32.884595  994955 command_runner.go:130] > Access: 2024-01-16 02:33:32.814611593 +0000
	I0116 02:33:32.884604  994955 command_runner.go:130] > Modify: 2024-01-16 02:33:32.814611593 +0000
	I0116 02:33:32.884611  994955 command_runner.go:130] > Change: 2024-01-16 02:33:32.814611593 +0000
	I0116 02:33:32.884618  994955 command_runner.go:130] >  Birth: -
	I0116 02:33:32.884682  994955 start.go:543] Will wait 60s for crictl version
	I0116 02:33:32.884740  994955 ssh_runner.go:195] Run: which crictl
	I0116 02:33:32.888291  994955 command_runner.go:130] > /usr/bin/crictl
	I0116 02:33:32.888363  994955 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:33:32.929339  994955 command_runner.go:130] > Version:  0.1.0
	I0116 02:33:32.929367  994955 command_runner.go:130] > RuntimeName:  cri-o
	I0116 02:33:32.929372  994955 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0116 02:33:32.929390  994955 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 02:33:32.932116  994955 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 02:33:32.932199  994955 ssh_runner.go:195] Run: crio --version
	I0116 02:33:32.980591  994955 command_runner.go:130] > crio version 1.24.1
	I0116 02:33:32.980617  994955 command_runner.go:130] > Version:          1.24.1
	I0116 02:33:32.980627  994955 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 02:33:32.980632  994955 command_runner.go:130] > GitTreeState:     dirty
	I0116 02:33:32.980639  994955 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 02:33:32.980644  994955 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 02:33:32.980648  994955 command_runner.go:130] > Compiler:         gc
	I0116 02:33:32.980653  994955 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:33:32.980661  994955 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:33:32.980668  994955 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:33:32.980675  994955 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:33:32.980680  994955 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:33:32.982156  994955 ssh_runner.go:195] Run: crio --version
	I0116 02:33:33.029949  994955 command_runner.go:130] > crio version 1.24.1
	I0116 02:33:33.029977  994955 command_runner.go:130] > Version:          1.24.1
	I0116 02:33:33.029985  994955 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 02:33:33.029990  994955 command_runner.go:130] > GitTreeState:     dirty
	I0116 02:33:33.029995  994955 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 02:33:33.030013  994955 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 02:33:33.030019  994955 command_runner.go:130] > Compiler:         gc
	I0116 02:33:33.030027  994955 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:33:33.030034  994955 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:33:33.030045  994955 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:33:33.030056  994955 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:33:33.030063  994955 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:33:33.033463  994955 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 02:33:33.034904  994955 main.go:141] libmachine: (multinode-835787) Calling .GetIP
	I0116 02:33:33.037751  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:33.038120  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:33:33.038150  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:33:33.038443  994955 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 02:33:33.042645  994955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:33:33.054351  994955 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:33:33.054409  994955 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:33:33.095338  994955 command_runner.go:130] > {
	I0116 02:33:33.095370  994955 command_runner.go:130] >   "images": [
	I0116 02:33:33.095379  994955 command_runner.go:130] >     {
	I0116 02:33:33.095391  994955 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0116 02:33:33.095405  994955 command_runner.go:130] >       "repoTags": [
	I0116 02:33:33.095415  994955 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0116 02:33:33.095421  994955 command_runner.go:130] >       ],
	I0116 02:33:33.095427  994955 command_runner.go:130] >       "repoDigests": [
	I0116 02:33:33.095449  994955 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0116 02:33:33.095464  994955 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0116 02:33:33.095473  994955 command_runner.go:130] >       ],
	I0116 02:33:33.095482  994955 command_runner.go:130] >       "size": "750414",
	I0116 02:33:33.095491  994955 command_runner.go:130] >       "uid": {
	I0116 02:33:33.095496  994955 command_runner.go:130] >         "value": "65535"
	I0116 02:33:33.095500  994955 command_runner.go:130] >       },
	I0116 02:33:33.095509  994955 command_runner.go:130] >       "username": "",
	I0116 02:33:33.095514  994955 command_runner.go:130] >       "spec": null,
	I0116 02:33:33.095520  994955 command_runner.go:130] >       "pinned": false
	I0116 02:33:33.095524  994955 command_runner.go:130] >     }
	I0116 02:33:33.095530  994955 command_runner.go:130] >   ]
	I0116 02:33:33.095533  994955 command_runner.go:130] > }
	I0116 02:33:33.095703  994955 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 02:33:33.095775  994955 ssh_runner.go:195] Run: which lz4
	I0116 02:33:33.099801  994955 command_runner.go:130] > /usr/bin/lz4
	I0116 02:33:33.099835  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0116 02:33:33.099937  994955 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 02:33:33.103816  994955 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 02:33:33.103959  994955 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 02:33:33.103983  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 02:33:34.965302  994955 crio.go:444] Took 1.865400 seconds to copy over tarball
	I0116 02:33:34.965376  994955 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 02:33:37.802726  994955 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.83730775s)
	I0116 02:33:37.802767  994955 crio.go:451] Took 2.837436 seconds to extract the tarball
	I0116 02:33:37.802777  994955 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 02:33:37.845133  994955 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:33:37.899008  994955 command_runner.go:130] > {
	I0116 02:33:37.899044  994955 command_runner.go:130] >   "images": [
	I0116 02:33:37.899052  994955 command_runner.go:130] >     {
	I0116 02:33:37.899065  994955 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0116 02:33:37.899074  994955 command_runner.go:130] >       "repoTags": [
	I0116 02:33:37.899084  994955 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0116 02:33:37.899095  994955 command_runner.go:130] >       ],
	I0116 02:33:37.899108  994955 command_runner.go:130] >       "repoDigests": [
	I0116 02:33:37.899133  994955 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0116 02:33:37.899152  994955 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0116 02:33:37.899160  994955 command_runner.go:130] >       ],
	I0116 02:33:37.899168  994955 command_runner.go:130] >       "size": "65258016",
	I0116 02:33:37.899175  994955 command_runner.go:130] >       "uid": null,
	I0116 02:33:37.899182  994955 command_runner.go:130] >       "username": "",
	I0116 02:33:37.899203  994955 command_runner.go:130] >       "spec": null,
	I0116 02:33:37.899215  994955 command_runner.go:130] >       "pinned": false
	I0116 02:33:37.899222  994955 command_runner.go:130] >     },
	I0116 02:33:37.899232  994955 command_runner.go:130] >     {
	I0116 02:33:37.899242  994955 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0116 02:33:37.899251  994955 command_runner.go:130] >       "repoTags": [
	I0116 02:33:37.899256  994955 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0116 02:33:37.899260  994955 command_runner.go:130] >       ],
	I0116 02:33:37.899265  994955 command_runner.go:130] >       "repoDigests": [
	I0116 02:33:37.899273  994955 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0116 02:33:37.899294  994955 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0116 02:33:37.899306  994955 command_runner.go:130] >       ],
	I0116 02:33:37.899318  994955 command_runner.go:130] >       "size": "31470524",
	I0116 02:33:37.899322  994955 command_runner.go:130] >       "uid": null,
	I0116 02:33:37.899326  994955 command_runner.go:130] >       "username": "",
	I0116 02:33:37.899330  994955 command_runner.go:130] >       "spec": null,
	I0116 02:33:37.899334  994955 command_runner.go:130] >       "pinned": false
	I0116 02:33:37.899338  994955 command_runner.go:130] >     },
	I0116 02:33:37.899341  994955 command_runner.go:130] >     {
	I0116 02:33:37.899347  994955 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0116 02:33:37.899352  994955 command_runner.go:130] >       "repoTags": [
	I0116 02:33:37.899357  994955 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0116 02:33:37.899361  994955 command_runner.go:130] >       ],
	I0116 02:33:37.899365  994955 command_runner.go:130] >       "repoDigests": [
	I0116 02:33:37.899377  994955 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0116 02:33:37.899390  994955 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0116 02:33:37.899401  994955 command_runner.go:130] >       ],
	I0116 02:33:37.899410  994955 command_runner.go:130] >       "size": "53621675",
	I0116 02:33:37.899425  994955 command_runner.go:130] >       "uid": null,
	I0116 02:33:37.899434  994955 command_runner.go:130] >       "username": "",
	I0116 02:33:37.899446  994955 command_runner.go:130] >       "spec": null,
	I0116 02:33:37.899454  994955 command_runner.go:130] >       "pinned": false
	I0116 02:33:37.899462  994955 command_runner.go:130] >     },
	I0116 02:33:37.899468  994955 command_runner.go:130] >     {
	I0116 02:33:37.899477  994955 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0116 02:33:37.899481  994955 command_runner.go:130] >       "repoTags": [
	I0116 02:33:37.899487  994955 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0116 02:33:37.899490  994955 command_runner.go:130] >       ],
	I0116 02:33:37.899495  994955 command_runner.go:130] >       "repoDigests": [
	I0116 02:33:37.899502  994955 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0116 02:33:37.899509  994955 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0116 02:33:37.899522  994955 command_runner.go:130] >       ],
	I0116 02:33:37.899531  994955 command_runner.go:130] >       "size": "295456551",
	I0116 02:33:37.899538  994955 command_runner.go:130] >       "uid": {
	I0116 02:33:37.899543  994955 command_runner.go:130] >         "value": "0"
	I0116 02:33:37.899547  994955 command_runner.go:130] >       },
	I0116 02:33:37.899557  994955 command_runner.go:130] >       "username": "",
	I0116 02:33:37.899561  994955 command_runner.go:130] >       "spec": null,
	I0116 02:33:37.899566  994955 command_runner.go:130] >       "pinned": false
	I0116 02:33:37.899572  994955 command_runner.go:130] >     },
	I0116 02:33:37.899576  994955 command_runner.go:130] >     {
	I0116 02:33:37.899583  994955 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0116 02:33:37.899590  994955 command_runner.go:130] >       "repoTags": [
	I0116 02:33:37.899595  994955 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0116 02:33:37.899599  994955 command_runner.go:130] >       ],
	I0116 02:33:37.899604  994955 command_runner.go:130] >       "repoDigests": [
	I0116 02:33:37.899621  994955 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0116 02:33:37.899631  994955 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0116 02:33:37.899637  994955 command_runner.go:130] >       ],
	I0116 02:33:37.899642  994955 command_runner.go:130] >       "size": "127226832",
	I0116 02:33:37.899649  994955 command_runner.go:130] >       "uid": {
	I0116 02:33:37.899653  994955 command_runner.go:130] >         "value": "0"
	I0116 02:33:37.899659  994955 command_runner.go:130] >       },
	I0116 02:33:37.899664  994955 command_runner.go:130] >       "username": "",
	I0116 02:33:37.899673  994955 command_runner.go:130] >       "spec": null,
	I0116 02:33:37.899681  994955 command_runner.go:130] >       "pinned": false
	I0116 02:33:37.899685  994955 command_runner.go:130] >     },
	I0116 02:33:37.899689  994955 command_runner.go:130] >     {
	I0116 02:33:37.899698  994955 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0116 02:33:37.899705  994955 command_runner.go:130] >       "repoTags": [
	I0116 02:33:37.899715  994955 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0116 02:33:37.899722  994955 command_runner.go:130] >       ],
	I0116 02:33:37.899727  994955 command_runner.go:130] >       "repoDigests": [
	I0116 02:33:37.899737  994955 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0116 02:33:37.899747  994955 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0116 02:33:37.899754  994955 command_runner.go:130] >       ],
	I0116 02:33:37.899758  994955 command_runner.go:130] >       "size": "123261750",
	I0116 02:33:37.899765  994955 command_runner.go:130] >       "uid": {
	I0116 02:33:37.899769  994955 command_runner.go:130] >         "value": "0"
	I0116 02:33:37.899773  994955 command_runner.go:130] >       },
	I0116 02:33:37.899780  994955 command_runner.go:130] >       "username": "",
	I0116 02:33:37.899784  994955 command_runner.go:130] >       "spec": null,
	I0116 02:33:37.899794  994955 command_runner.go:130] >       "pinned": false
	I0116 02:33:37.899802  994955 command_runner.go:130] >     },
	I0116 02:33:37.899806  994955 command_runner.go:130] >     {
	I0116 02:33:37.899815  994955 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0116 02:33:37.899822  994955 command_runner.go:130] >       "repoTags": [
	I0116 02:33:37.899827  994955 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0116 02:33:37.899833  994955 command_runner.go:130] >       ],
	I0116 02:33:37.899838  994955 command_runner.go:130] >       "repoDigests": [
	I0116 02:33:37.899848  994955 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0116 02:33:37.899857  994955 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0116 02:33:37.899866  994955 command_runner.go:130] >       ],
	I0116 02:33:37.899873  994955 command_runner.go:130] >       "size": "74749335",
	I0116 02:33:37.899877  994955 command_runner.go:130] >       "uid": null,
	I0116 02:33:37.899885  994955 command_runner.go:130] >       "username": "",
	I0116 02:33:37.899889  994955 command_runner.go:130] >       "spec": null,
	I0116 02:33:37.899897  994955 command_runner.go:130] >       "pinned": false
	I0116 02:33:37.899903  994955 command_runner.go:130] >     },
	I0116 02:33:37.899914  994955 command_runner.go:130] >     {
	I0116 02:33:37.899929  994955 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0116 02:33:37.899941  994955 command_runner.go:130] >       "repoTags": [
	I0116 02:33:37.899951  994955 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0116 02:33:37.899961  994955 command_runner.go:130] >       ],
	I0116 02:33:37.899972  994955 command_runner.go:130] >       "repoDigests": [
	I0116 02:33:37.900004  994955 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0116 02:33:37.900021  994955 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0116 02:33:37.900030  994955 command_runner.go:130] >       ],
	I0116 02:33:37.900035  994955 command_runner.go:130] >       "size": "61551410",
	I0116 02:33:37.900042  994955 command_runner.go:130] >       "uid": {
	I0116 02:33:37.900046  994955 command_runner.go:130] >         "value": "0"
	I0116 02:33:37.900052  994955 command_runner.go:130] >       },
	I0116 02:33:37.900057  994955 command_runner.go:130] >       "username": "",
	I0116 02:33:37.900064  994955 command_runner.go:130] >       "spec": null,
	I0116 02:33:37.900068  994955 command_runner.go:130] >       "pinned": false
	I0116 02:33:37.900074  994955 command_runner.go:130] >     },
	I0116 02:33:37.900078  994955 command_runner.go:130] >     {
	I0116 02:33:37.900087  994955 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0116 02:33:37.900097  994955 command_runner.go:130] >       "repoTags": [
	I0116 02:33:37.900106  994955 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0116 02:33:37.900111  994955 command_runner.go:130] >       ],
	I0116 02:33:37.900121  994955 command_runner.go:130] >       "repoDigests": [
	I0116 02:33:37.900128  994955 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0116 02:33:37.900135  994955 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0116 02:33:37.900146  994955 command_runner.go:130] >       ],
	I0116 02:33:37.900150  994955 command_runner.go:130] >       "size": "750414",
	I0116 02:33:37.900154  994955 command_runner.go:130] >       "uid": {
	I0116 02:33:37.900158  994955 command_runner.go:130] >         "value": "65535"
	I0116 02:33:37.900162  994955 command_runner.go:130] >       },
	I0116 02:33:37.900166  994955 command_runner.go:130] >       "username": "",
	I0116 02:33:37.900170  994955 command_runner.go:130] >       "spec": null,
	I0116 02:33:37.900173  994955 command_runner.go:130] >       "pinned": false
	I0116 02:33:37.900177  994955 command_runner.go:130] >     }
	I0116 02:33:37.900180  994955 command_runner.go:130] >   ]
	I0116 02:33:37.900183  994955 command_runner.go:130] > }
	I0116 02:33:37.900311  994955 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 02:33:37.900322  994955 cache_images.go:84] Images are preloaded, skipping loading
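	The image inventory above can be reproduced by hand on the node; a minimal sketch, assuming crictl is present inside the minikube VM and CRI-O is listening on its default socket (both are assumptions, not shown in this log):

	  # list the preloaded images as JSON, the same data CRI-O returned to minikube above
	  out/minikube-linux-amd64 -p multinode-835787 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images -o json"

	If this query comes back empty while the log reports the images as preloaded, the preload tarball was most likely not extracted into the storage root shown in the crio config below.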
	I0116 02:33:37.900388  994955 ssh_runner.go:195] Run: crio config
	I0116 02:33:37.949381  994955 command_runner.go:130] ! time="2024-01-16 02:33:37.897485664Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0116 02:33:37.949417  994955 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0116 02:33:37.957229  994955 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 02:33:37.957265  994955 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 02:33:37.957273  994955 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 02:33:37.957278  994955 command_runner.go:130] > #
	I0116 02:33:37.957288  994955 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 02:33:37.957297  994955 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 02:33:37.957308  994955 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 02:33:37.957323  994955 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 02:33:37.957334  994955 command_runner.go:130] > # reload'.
	I0116 02:33:37.957345  994955 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 02:33:37.957352  994955 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 02:33:37.957359  994955 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 02:33:37.957367  994955 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 02:33:37.957371  994955 command_runner.go:130] > [crio]
	I0116 02:33:37.957377  994955 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 02:33:37.957387  994955 command_runner.go:130] > # containers images, in this directory.
	I0116 02:33:37.957398  994955 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0116 02:33:37.957421  994955 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 02:33:37.957433  994955 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0116 02:33:37.957443  994955 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 02:33:37.957450  994955 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 02:33:37.957457  994955 command_runner.go:130] > storage_driver = "overlay"
	I0116 02:33:37.957463  994955 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 02:33:37.957473  994955 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 02:33:37.957478  994955 command_runner.go:130] > storage_option = [
	I0116 02:33:37.957484  994955 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0116 02:33:37.957490  994955 command_runner.go:130] > ]
	I0116 02:33:37.957501  994955 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 02:33:37.957518  994955 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 02:33:37.957529  994955 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 02:33:37.957538  994955 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 02:33:37.957551  994955 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 02:33:37.957558  994955 command_runner.go:130] > # always happen on a node reboot
	I0116 02:33:37.957565  994955 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 02:33:37.957575  994955 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 02:33:37.957588  994955 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 02:33:37.957612  994955 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 02:33:37.957624  994955 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 02:33:37.957638  994955 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 02:33:37.957652  994955 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 02:33:37.957659  994955 command_runner.go:130] > # internal_wipe = true
	I0116 02:33:37.957672  994955 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 02:33:37.957686  994955 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 02:33:37.957699  994955 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 02:33:37.957711  994955 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 02:33:37.957723  994955 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 02:33:37.957732  994955 command_runner.go:130] > [crio.api]
	I0116 02:33:37.957743  994955 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 02:33:37.957754  994955 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 02:33:37.957767  994955 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 02:33:37.957778  994955 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 02:33:37.957795  994955 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 02:33:37.957821  994955 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 02:33:37.957832  994955 command_runner.go:130] > # stream_port = "0"
	I0116 02:33:37.957842  994955 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 02:33:37.957852  994955 command_runner.go:130] > # stream_enable_tls = false
	I0116 02:33:37.957865  994955 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 02:33:37.957876  994955 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 02:33:37.957886  994955 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 02:33:37.957898  994955 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 02:33:37.957907  994955 command_runner.go:130] > # minutes.
	I0116 02:33:37.957919  994955 command_runner.go:130] > # stream_tls_cert = ""
	I0116 02:33:37.957932  994955 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 02:33:37.957944  994955 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 02:33:37.957955  994955 command_runner.go:130] > # stream_tls_key = ""
	I0116 02:33:37.957967  994955 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 02:33:37.957976  994955 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 02:33:37.957987  994955 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 02:33:37.957998  994955 command_runner.go:130] > # stream_tls_ca = ""
	I0116 02:33:37.958021  994955 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:33:37.958032  994955 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0116 02:33:37.958047  994955 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:33:37.958057  994955 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0116 02:33:37.958114  994955 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 02:33:37.958133  994955 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 02:33:37.958139  994955 command_runner.go:130] > [crio.runtime]
	I0116 02:33:37.958150  994955 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 02:33:37.958163  994955 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 02:33:37.958174  994955 command_runner.go:130] > # "nofile=1024:2048"
	I0116 02:33:37.958187  994955 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 02:33:37.958198  994955 command_runner.go:130] > # default_ulimits = [
	I0116 02:33:37.958207  994955 command_runner.go:130] > # ]
	I0116 02:33:37.958225  994955 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 02:33:37.958235  994955 command_runner.go:130] > # no_pivot = false
	I0116 02:33:37.958243  994955 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 02:33:37.958257  994955 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 02:33:37.958269  994955 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 02:33:37.958285  994955 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 02:33:37.958297  994955 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 02:33:37.958311  994955 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:33:37.958320  994955 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0116 02:33:37.958325  994955 command_runner.go:130] > # Cgroup setting for conmon
	I0116 02:33:37.958339  994955 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 02:33:37.958350  994955 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 02:33:37.958364  994955 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 02:33:37.958376  994955 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 02:33:37.958390  994955 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:33:37.958399  994955 command_runner.go:130] > conmon_env = [
	I0116 02:33:37.958408  994955 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0116 02:33:37.958412  994955 command_runner.go:130] > ]
	I0116 02:33:37.958424  994955 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 02:33:37.958437  994955 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 02:33:37.958449  994955 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 02:33:37.958459  994955 command_runner.go:130] > # default_env = [
	I0116 02:33:37.958468  994955 command_runner.go:130] > # ]
	I0116 02:33:37.958483  994955 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 02:33:37.958491  994955 command_runner.go:130] > # selinux = false
	I0116 02:33:37.958498  994955 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 02:33:37.958512  994955 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 02:33:37.958525  994955 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 02:33:37.958535  994955 command_runner.go:130] > # seccomp_profile = ""
	I0116 02:33:37.958552  994955 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 02:33:37.958565  994955 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 02:33:37.958579  994955 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 02:33:37.958589  994955 command_runner.go:130] > # which might increase security.
	I0116 02:33:37.958600  994955 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0116 02:33:37.958614  994955 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 02:33:37.958628  994955 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 02:33:37.958641  994955 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 02:33:37.958654  994955 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 02:33:37.958663  994955 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:33:37.958670  994955 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 02:33:37.958683  994955 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 02:33:37.958698  994955 command_runner.go:130] > # the cgroup blockio controller.
	I0116 02:33:37.958708  994955 command_runner.go:130] > # blockio_config_file = ""
	I0116 02:33:37.958719  994955 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 02:33:37.958729  994955 command_runner.go:130] > # irqbalance daemon.
	I0116 02:33:37.958740  994955 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 02:33:37.958750  994955 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 02:33:37.958760  994955 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:33:37.958771  994955 command_runner.go:130] > # rdt_config_file = ""
	I0116 02:33:37.958784  994955 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 02:33:37.958795  994955 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 02:33:37.958805  994955 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 02:33:37.958815  994955 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 02:33:37.958829  994955 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 02:33:37.958838  994955 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 02:33:37.958846  994955 command_runner.go:130] > # will be added.
	I0116 02:33:37.958856  994955 command_runner.go:130] > # default_capabilities = [
	I0116 02:33:37.958883  994955 command_runner.go:130] > # 	"CHOWN",
	I0116 02:33:37.958893  994955 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 02:33:37.958905  994955 command_runner.go:130] > # 	"FSETID",
	I0116 02:33:37.958915  994955 command_runner.go:130] > # 	"FOWNER",
	I0116 02:33:37.958924  994955 command_runner.go:130] > # 	"SETGID",
	I0116 02:33:37.958932  994955 command_runner.go:130] > # 	"SETUID",
	I0116 02:33:37.958942  994955 command_runner.go:130] > # 	"SETPCAP",
	I0116 02:33:37.958953  994955 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 02:33:37.958963  994955 command_runner.go:130] > # 	"KILL",
	I0116 02:33:37.958972  994955 command_runner.go:130] > # ]
	I0116 02:33:37.958985  994955 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 02:33:37.958997  994955 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:33:37.959006  994955 command_runner.go:130] > # default_sysctls = [
	I0116 02:33:37.959012  994955 command_runner.go:130] > # ]
	I0116 02:33:37.959020  994955 command_runner.go:130] > # List of devices on the host that a
	I0116 02:33:37.959033  994955 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 02:33:37.959044  994955 command_runner.go:130] > # allowed_devices = [
	I0116 02:33:37.959054  994955 command_runner.go:130] > # 	"/dev/fuse",
	I0116 02:33:37.959063  994955 command_runner.go:130] > # ]
	I0116 02:33:37.959074  994955 command_runner.go:130] > # List of additional devices, specified as
	I0116 02:33:37.959093  994955 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 02:33:37.959101  994955 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 02:33:37.959148  994955 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:33:37.959162  994955 command_runner.go:130] > # additional_devices = [
	I0116 02:33:37.959168  994955 command_runner.go:130] > # ]
	I0116 02:33:37.959176  994955 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 02:33:37.959184  994955 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 02:33:37.959188  994955 command_runner.go:130] > # 	"/etc/cdi",
	I0116 02:33:37.959198  994955 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 02:33:37.959208  994955 command_runner.go:130] > # ]
	I0116 02:33:37.959245  994955 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 02:33:37.959258  994955 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 02:33:37.959265  994955 command_runner.go:130] > # Defaults to false.
	I0116 02:33:37.959272  994955 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 02:33:37.959281  994955 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 02:33:37.959295  994955 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 02:33:37.959306  994955 command_runner.go:130] > # hooks_dir = [
	I0116 02:33:37.959317  994955 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 02:33:37.959329  994955 command_runner.go:130] > # ]
	I0116 02:33:37.959342  994955 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 02:33:37.959354  994955 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 02:33:37.959362  994955 command_runner.go:130] > # its default mounts from the following two files:
	I0116 02:33:37.959370  994955 command_runner.go:130] > #
	I0116 02:33:37.959385  994955 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 02:33:37.959398  994955 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 02:33:37.959411  994955 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 02:33:37.959419  994955 command_runner.go:130] > #
	I0116 02:33:37.959430  994955 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 02:33:37.959442  994955 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 02:33:37.959451  994955 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 02:33:37.959463  994955 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 02:33:37.959472  994955 command_runner.go:130] > #
	I0116 02:33:37.959483  994955 command_runner.go:130] > # default_mounts_file = ""
	I0116 02:33:37.959495  994955 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 02:33:37.959508  994955 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 02:33:37.959518  994955 command_runner.go:130] > pids_limit = 1024
	I0116 02:33:37.959533  994955 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0116 02:33:37.959546  994955 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 02:33:37.959561  994955 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 02:33:37.959577  994955 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 02:33:37.959587  994955 command_runner.go:130] > # log_size_max = -1
	I0116 02:33:37.959601  994955 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0116 02:33:37.959611  994955 command_runner.go:130] > # log_to_journald = false
	I0116 02:33:37.959620  994955 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 02:33:37.959631  994955 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 02:33:37.959644  994955 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 02:33:37.959656  994955 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 02:33:37.959668  994955 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 02:33:37.959678  994955 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 02:33:37.959690  994955 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 02:33:37.959699  994955 command_runner.go:130] > # read_only = false
	I0116 02:33:37.959708  994955 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 02:33:37.959720  994955 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 02:33:37.959731  994955 command_runner.go:130] > # live configuration reload.
	I0116 02:33:37.959745  994955 command_runner.go:130] > # log_level = "info"
	I0116 02:33:37.959757  994955 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 02:33:37.959769  994955 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:33:37.959779  994955 command_runner.go:130] > # log_filter = ""
	I0116 02:33:37.959790  994955 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 02:33:37.959800  994955 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 02:33:37.959811  994955 command_runner.go:130] > # separated by comma.
	I0116 02:33:37.959821  994955 command_runner.go:130] > # uid_mappings = ""
	I0116 02:33:37.959832  994955 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 02:33:37.959845  994955 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 02:33:37.959855  994955 command_runner.go:130] > # separated by comma.
	I0116 02:33:37.959865  994955 command_runner.go:130] > # gid_mappings = ""
	I0116 02:33:37.959876  994955 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 02:33:37.959886  994955 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:33:37.959898  994955 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:33:37.959910  994955 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 02:33:37.959923  994955 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 02:33:37.959937  994955 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:33:37.959954  994955 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:33:37.959963  994955 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 02:33:37.959972  994955 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 02:33:37.959984  994955 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 02:33:37.959998  994955 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 02:33:37.960008  994955 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 02:33:37.960018  994955 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 02:33:37.960033  994955 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 02:33:37.960044  994955 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 02:33:37.960053  994955 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 02:33:37.960065  994955 command_runner.go:130] > drop_infra_ctr = false
	I0116 02:33:37.960080  994955 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 02:33:37.960093  994955 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 02:33:37.960107  994955 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 02:33:37.960117  994955 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 02:33:37.960130  994955 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 02:33:37.960138  994955 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 02:33:37.960145  994955 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 02:33:37.960160  994955 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 02:33:37.960175  994955 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0116 02:33:37.960188  994955 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 02:33:37.960202  994955 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 02:33:37.960219  994955 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 02:33:37.960227  994955 command_runner.go:130] > # default_runtime = "runc"
	I0116 02:33:37.960233  994955 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 02:33:37.960249  994955 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0116 02:33:37.960267  994955 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 02:33:37.960278  994955 command_runner.go:130] > # creation as a file is not desired either.
	I0116 02:33:37.960293  994955 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 02:33:37.960306  994955 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 02:33:37.960314  994955 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 02:33:37.960318  994955 command_runner.go:130] > # ]
	I0116 02:33:37.960332  994955 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 02:33:37.960346  994955 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 02:33:37.960359  994955 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 02:33:37.960373  994955 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 02:33:37.960386  994955 command_runner.go:130] > #
	I0116 02:33:37.960396  994955 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 02:33:37.960403  994955 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 02:33:37.960414  994955 command_runner.go:130] > #  runtime_type = "oci"
	I0116 02:33:37.960425  994955 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 02:33:37.960437  994955 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 02:33:37.960447  994955 command_runner.go:130] > #  allowed_annotations = []
	I0116 02:33:37.960456  994955 command_runner.go:130] > # Where:
	I0116 02:33:37.960465  994955 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 02:33:37.960479  994955 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 02:33:37.960488  994955 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 02:33:37.960501  994955 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 02:33:37.960515  994955 command_runner.go:130] > #   in $PATH.
	I0116 02:33:37.960528  994955 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 02:33:37.960540  994955 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 02:33:37.960550  994955 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 02:33:37.960559  994955 command_runner.go:130] > #   state.
	I0116 02:33:37.960570  994955 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 02:33:37.960584  994955 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0116 02:33:37.960598  994955 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 02:33:37.960613  994955 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 02:33:37.960626  994955 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 02:33:37.960640  994955 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 02:33:37.960651  994955 command_runner.go:130] > #   The currently recognized values are:
	I0116 02:33:37.960660  994955 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 02:33:37.960675  994955 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 02:33:37.960691  994955 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 02:33:37.960704  994955 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 02:33:37.960718  994955 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 02:33:37.960732  994955 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 02:33:37.960743  994955 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 02:33:37.960754  994955 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 02:33:37.960766  994955 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 02:33:37.960777  994955 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 02:33:37.960787  994955 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0116 02:33:37.960797  994955 command_runner.go:130] > runtime_type = "oci"
	I0116 02:33:37.960808  994955 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 02:33:37.960819  994955 command_runner.go:130] > runtime_config_path = ""
	I0116 02:33:37.960828  994955 command_runner.go:130] > monitor_path = ""
	I0116 02:33:37.960835  994955 command_runner.go:130] > monitor_cgroup = ""
	I0116 02:33:37.960841  994955 command_runner.go:130] > monitor_exec_cgroup = ""
	I0116 02:33:37.960855  994955 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 02:33:37.960865  994955 command_runner.go:130] > # running containers
	I0116 02:33:37.960875  994955 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 02:33:37.960888  994955 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 02:33:37.960951  994955 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 02:33:37.960965  994955 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 02:33:37.960977  994955 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 02:33:37.960988  994955 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 02:33:37.960999  994955 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 02:33:37.961007  994955 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 02:33:37.961012  994955 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 02:33:37.961023  994955 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0116 02:33:37.961037  994955 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 02:33:37.961052  994955 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 02:33:37.961066  994955 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 02:33:37.961085  994955 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0116 02:33:37.961096  994955 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 02:33:37.961109  994955 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 02:33:37.961127  994955 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 02:33:37.961143  994955 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 02:33:37.961162  994955 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 02:33:37.961172  994955 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 02:33:37.961176  994955 command_runner.go:130] > # Example:
	I0116 02:33:37.961181  994955 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 02:33:37.961189  994955 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 02:33:37.961198  994955 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 02:33:37.961207  994955 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 02:33:37.961218  994955 command_runner.go:130] > # cpuset = 0
	I0116 02:33:37.961226  994955 command_runner.go:130] > # cpushares = "0-1"
	I0116 02:33:37.961236  994955 command_runner.go:130] > # Where:
	I0116 02:33:37.961244  994955 command_runner.go:130] > # The workload name is workload-type.
	I0116 02:33:37.961261  994955 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 02:33:37.961271  994955 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 02:33:37.961284  994955 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 02:33:37.961300  994955 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 02:33:37.961313  994955 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0116 02:33:37.961321  994955 command_runner.go:130] > # 
	I0116 02:33:37.961334  994955 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 02:33:37.961342  994955 command_runner.go:130] > #
	I0116 02:33:37.961349  994955 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 02:33:37.961361  994955 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 02:33:37.961376  994955 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 02:33:37.961390  994955 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 02:33:37.961403  994955 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 02:33:37.961412  994955 command_runner.go:130] > [crio.image]
	I0116 02:33:37.961425  994955 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 02:33:37.961433  994955 command_runner.go:130] > # default_transport = "docker://"
	I0116 02:33:37.961443  994955 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 02:33:37.961455  994955 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:33:37.961468  994955 command_runner.go:130] > # global_auth_file = ""
	I0116 02:33:37.961485  994955 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 02:33:37.961497  994955 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:33:37.961508  994955 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 02:33:37.961519  994955 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 02:33:37.961527  994955 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:33:37.961533  994955 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:33:37.961540  994955 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 02:33:37.961546  994955 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 02:33:37.961559  994955 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0116 02:33:37.961573  994955 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0116 02:33:37.961583  994955 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 02:33:37.961597  994955 command_runner.go:130] > # pause_command = "/pause"
	I0116 02:33:37.961610  994955 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 02:33:37.961623  994955 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 02:33:37.961630  994955 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 02:33:37.961635  994955 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 02:33:37.961640  994955 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 02:33:37.961647  994955 command_runner.go:130] > # signature_policy = ""
	I0116 02:33:37.961653  994955 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 02:33:37.961658  994955 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 02:33:37.961662  994955 command_runner.go:130] > # changing them here.
	I0116 02:33:37.961666  994955 command_runner.go:130] > # insecure_registries = [
	I0116 02:33:37.961669  994955 command_runner.go:130] > # ]
	I0116 02:33:37.961678  994955 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 02:33:37.961682  994955 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0116 02:33:37.961686  994955 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 02:33:37.961691  994955 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 02:33:37.961699  994955 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 02:33:37.961709  994955 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 02:33:37.961715  994955 command_runner.go:130] > # CNI plugins.
	I0116 02:33:37.961722  994955 command_runner.go:130] > [crio.network]
	I0116 02:33:37.961732  994955 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 02:33:37.961741  994955 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0116 02:33:37.961748  994955 command_runner.go:130] > # cni_default_network = ""
	I0116 02:33:37.961757  994955 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 02:33:37.961767  994955 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 02:33:37.961773  994955 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 02:33:37.961777  994955 command_runner.go:130] > # plugin_dirs = [
	I0116 02:33:37.961780  994955 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 02:33:37.961787  994955 command_runner.go:130] > # ]
	I0116 02:33:37.961792  994955 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 02:33:37.961796  994955 command_runner.go:130] > [crio.metrics]
	I0116 02:33:37.961813  994955 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 02:33:37.961821  994955 command_runner.go:130] > enable_metrics = true
	I0116 02:33:37.961829  994955 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 02:33:37.961837  994955 command_runner.go:130] > # Per default all metrics are enabled.
	I0116 02:33:37.961847  994955 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0116 02:33:37.961861  994955 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 02:33:37.961873  994955 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 02:33:37.961881  994955 command_runner.go:130] > # metrics_collectors = [
	I0116 02:33:37.961885  994955 command_runner.go:130] > # 	"operations",
	I0116 02:33:37.961892  994955 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 02:33:37.961897  994955 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 02:33:37.961904  994955 command_runner.go:130] > # 	"operations_errors",
	I0116 02:33:37.961911  994955 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 02:33:37.961916  994955 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 02:33:37.961924  994955 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 02:33:37.961932  994955 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 02:33:37.961942  994955 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 02:33:37.961953  994955 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 02:33:37.961963  994955 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 02:33:37.961973  994955 command_runner.go:130] > # 	"containers_oom_total",
	I0116 02:33:37.961983  994955 command_runner.go:130] > # 	"containers_oom",
	I0116 02:33:37.961994  994955 command_runner.go:130] > # 	"processes_defunct",
	I0116 02:33:37.962000  994955 command_runner.go:130] > # 	"operations_total",
	I0116 02:33:37.962008  994955 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 02:33:37.962012  994955 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 02:33:37.962019  994955 command_runner.go:130] > # 	"operations_errors_total",
	I0116 02:33:37.962023  994955 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 02:33:37.962030  994955 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 02:33:37.962035  994955 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 02:33:37.962044  994955 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 02:33:37.962049  994955 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 02:33:37.962054  994955 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 02:33:37.962060  994955 command_runner.go:130] > # ]
	I0116 02:33:37.962065  994955 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 02:33:37.962071  994955 command_runner.go:130] > # metrics_port = 9090
	I0116 02:33:37.962076  994955 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 02:33:37.962083  994955 command_runner.go:130] > # metrics_socket = ""
	I0116 02:33:37.962088  994955 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 02:33:37.962094  994955 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 02:33:37.962102  994955 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 02:33:37.962108  994955 command_runner.go:130] > # certificate on any modification event.
	I0116 02:33:37.962112  994955 command_runner.go:130] > # metrics_cert = ""
	I0116 02:33:37.962120  994955 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 02:33:37.962124  994955 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 02:33:37.962128  994955 command_runner.go:130] > # metrics_key = ""
	I0116 02:33:37.962133  994955 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 02:33:37.962137  994955 command_runner.go:130] > [crio.tracing]
	I0116 02:33:37.962144  994955 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 02:33:37.962148  994955 command_runner.go:130] > # enable_tracing = false
	I0116 02:33:37.962153  994955 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0116 02:33:37.962158  994955 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 02:33:37.962163  994955 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 02:33:37.962169  994955 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 02:33:37.962175  994955 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 02:33:37.962181  994955 command_runner.go:130] > [crio.stats]
	I0116 02:33:37.962187  994955 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 02:33:37.962194  994955 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 02:33:37.962198  994955 command_runner.go:130] > # stats_collection_period = 0
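	The TOML dump above is the effective CRI-O configuration minikube wrote for this node (cgroup_manager = "cgroupfs", pids_limit = 1024, pause_image = "registry.k8s.io/pause:3.9", conmon = "/usr/libexec/crio/conmon"). A minimal sketch of re-checking a few of those values directly on the running node, assuming the profile name multinode-835787 taken from this run:

	  # re-dump the effective CRI-O configuration and pick out the values of interest
	  out/minikube-linux-amd64 -p multinode-835787 ssh "sudo crio config | grep -E 'cgroup_manager|pids_limit|pause_image'"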
	I0116 02:33:37.962349  994955 cni.go:84] Creating CNI manager for ""
	I0116 02:33:37.962367  994955 cni.go:136] 3 nodes found, recommending kindnet
	I0116 02:33:37.962388  994955 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:33:37.962407  994955 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.50 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-835787 NodeName:multinode-835787 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 02:33:37.962562  994955 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-835787"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 02:33:37.962655  994955 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-835787 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-835787 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 02:33:37.962715  994955 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 02:33:37.973302  994955 command_runner.go:130] > kubeadm
	I0116 02:33:37.973329  994955 command_runner.go:130] > kubectl
	I0116 02:33:37.973336  994955 command_runner.go:130] > kubelet
	I0116 02:33:37.973379  994955 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 02:33:37.973443  994955 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 02:33:37.983685  994955 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0116 02:33:38.001354  994955 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 02:33:38.018492  994955 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
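The three scp steps above stage the kubelet drop-in, the kubelet unit, and the freshly rendered kubeadm config on the node. To eyeball them there (a sketch assuming a shell on the node, e.g. via minikube ssh; paths are the scp targets just logged):

    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo cat /lib/systemd/system/kubelet.service
    sudo cat /var/tmp/minikube/kubeadm.yaml.new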
	I0116 02:33:38.036344  994955 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I0116 02:33:38.040404  994955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:33:38.052558  994955 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787 for IP: 192.168.39.50
	I0116 02:33:38.052596  994955 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:33:38.052784  994955 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 02:33:38.052855  994955 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 02:33:38.053050  994955 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key
	I0116 02:33:38.053141  994955 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.key.59dcb911
	I0116 02:33:38.053214  994955 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/proxy-client.key
	I0116 02:33:38.053263  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0116 02:33:38.053286  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0116 02:33:38.053301  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0116 02:33:38.053322  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0116 02:33:38.053341  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 02:33:38.053358  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 02:33:38.053374  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 02:33:38.053389  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 02:33:38.053472  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 02:33:38.053509  994955 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 02:33:38.053525  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 02:33:38.053561  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 02:33:38.053599  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:33:38.053632  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 02:33:38.053695  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 02:33:38.053735  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem -> /usr/share/ca-certificates/978482.pem
	I0116 02:33:38.053752  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> /usr/share/ca-certificates/9784822.pem
	I0116 02:33:38.053770  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:33:38.054790  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 02:33:38.078799  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 02:33:38.103661  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 02:33:38.128815  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 02:33:38.153389  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:33:38.178561  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 02:33:38.204367  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:33:38.230517  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 02:33:38.256822  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 02:33:38.281643  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 02:33:38.306778  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:33:38.330840  994955 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 02:33:38.348580  994955 ssh_runner.go:195] Run: openssl version
	I0116 02:33:38.354300  994955 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 02:33:38.354554  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 02:33:38.366149  994955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 02:33:38.371239  994955 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 02:33:38.371550  994955 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 02:33:38.371625  994955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 02:33:38.377449  994955 command_runner.go:130] > 51391683
	I0116 02:33:38.377565  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 02:33:38.390380  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 02:33:38.403582  994955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 02:33:38.408755  994955 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 02:33:38.408800  994955 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 02:33:38.408855  994955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 02:33:38.415269  994955 command_runner.go:130] > 3ec20f2e
	I0116 02:33:38.415369  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 02:33:38.428145  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:33:38.439726  994955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:33:38.444950  994955 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:33:38.444992  994955 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:33:38.445080  994955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:33:38.450659  994955 command_runner.go:130] > b5213941
	I0116 02:33:38.450877  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
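The three hash-and-link passes above reproduce what OpenSSL's c_rehash does for the system trust directory: each CA certificate is linked under its subject-name hash so TLS clients can find it. Condensed into a generic form (paths taken from the minikubeCA pass just above):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo test -L "/etc/ssl/certs/${hash}.0" || \
        sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"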
	I0116 02:33:38.463002  994955 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:33:38.468047  994955 command_runner.go:130] > ca.crt
	I0116 02:33:38.468069  994955 command_runner.go:130] > ca.key
	I0116 02:33:38.468074  994955 command_runner.go:130] > healthcheck-client.crt
	I0116 02:33:38.468079  994955 command_runner.go:130] > healthcheck-client.key
	I0116 02:33:38.468084  994955 command_runner.go:130] > peer.crt
	I0116 02:33:38.468095  994955 command_runner.go:130] > peer.key
	I0116 02:33:38.468099  994955 command_runner.go:130] > server.crt
	I0116 02:33:38.468102  994955 command_runner.go:130] > server.key
	I0116 02:33:38.468156  994955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 02:33:38.474620  994955 command_runner.go:130] > Certificate will not expire
	I0116 02:33:38.474708  994955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 02:33:38.480563  994955 command_runner.go:130] > Certificate will not expire
	I0116 02:33:38.480874  994955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 02:33:38.486593  994955 command_runner.go:130] > Certificate will not expire
	I0116 02:33:38.487015  994955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 02:33:38.492978  994955 command_runner.go:130] > Certificate will not expire
	I0116 02:33:38.493077  994955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 02:33:38.499066  994955 command_runner.go:130] > Certificate will not expire
	I0116 02:33:38.499456  994955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 02:33:38.505640  994955 command_runner.go:130] > Certificate will not expire
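Each of the six probes above asks openssl whether the certificate expires within 86400 seconds (24 hours); a zero exit status, with the "Certificate will not expire" message seen here, means the existing cert can be reused rather than regenerated. For example:

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "still valid for at least 24h - reuse it"
    else
        echo "expires within 24h - would need regeneration"
    fi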
	I0116 02:33:38.506136  994955 kubeadm.go:404] StartCluster: {Name:multinode-835787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-835787 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.123 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:33:38.506334  994955 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 02:33:38.506421  994955 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 02:33:38.546739  994955 cri.go:89] found id: ""
	I0116 02:33:38.546831  994955 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 02:33:38.557374  994955 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0116 02:33:38.557404  994955 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0116 02:33:38.557414  994955 command_runner.go:130] > /var/lib/minikube/etcd:
	I0116 02:33:38.557420  994955 command_runner.go:130] > member
	I0116 02:33:38.557449  994955 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 02:33:38.557458  994955 kubeadm.go:636] restartCluster start
	I0116 02:33:38.557516  994955 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 02:33:38.567526  994955 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:38.568239  994955 kubeconfig.go:92] found "multinode-835787" server: "https://192.168.39.50:8443"
	I0116 02:33:38.568709  994955 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:33:38.569021  994955 kapi.go:59] client config for multinode-835787: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key", CAFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:33:38.569679  994955 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 02:33:38.569906  994955 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 02:33:38.579818  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:38.579892  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:38.591551  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:39.080193  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:39.080344  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:39.093171  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:39.579915  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:39.580011  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:39.592110  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:40.080647  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:40.080731  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:40.093101  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:40.580676  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:40.580785  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:40.593266  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:41.080893  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:41.081007  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:41.093310  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:41.580844  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:41.580955  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:41.593024  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:42.080651  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:42.080754  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:42.094286  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:42.580395  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:42.580504  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:42.592501  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:43.080012  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:43.080175  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:43.093157  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:43.580823  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:43.580927  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:43.593937  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:44.080579  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:44.080699  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:44.094995  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:44.580053  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:44.580161  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:44.593641  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:45.080125  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:45.080261  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:45.094246  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:45.580861  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:45.580969  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:45.594447  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:46.079936  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:46.080032  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:46.093123  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:46.580758  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:46.580859  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:46.592809  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:47.080341  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:47.080482  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:47.092603  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:47.580762  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:47.580893  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:47.593361  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:48.079926  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:48.080044  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:48.092448  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:48.580280  994955 api_server.go:166] Checking apiserver status ...
	I0116 02:33:48.580360  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 02:33:48.592604  994955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 02:33:48.592642  994955 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 02:33:48.592665  994955 kubeadm.go:1135] stopping kube-system containers ...
	I0116 02:33:48.592686  994955 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 02:33:48.592763  994955 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 02:33:48.637169  994955 cri.go:89] found id: ""
	I0116 02:33:48.637255  994955 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 02:33:48.654095  994955 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 02:33:48.663470  994955 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0116 02:33:48.663504  994955 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0116 02:33:48.663517  994955 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0116 02:33:48.663525  994955 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:33:48.663672  994955 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:33:48.663748  994955 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 02:33:48.673352  994955 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 02:33:48.673377  994955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 02:33:48.800465  994955 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:33:48.801142  994955 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0116 02:33:48.801823  994955 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0116 02:33:48.803061  994955 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 02:33:48.803570  994955 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0116 02:33:48.804113  994955 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0116 02:33:48.804954  994955 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0116 02:33:48.805425  994955 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0116 02:33:48.805892  994955 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0116 02:33:48.806483  994955 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 02:33:48.806971  994955 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 02:33:48.807482  994955 command_runner.go:130] > [certs] Using the existing "sa" key
	I0116 02:33:48.808992  994955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 02:33:49.661209  994955 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:33:49.661241  994955 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:33:49.661251  994955 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:33:49.661259  994955 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:33:49.661267  994955 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:33:49.661457  994955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 02:33:49.857794  994955 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:33:49.857833  994955 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:33:49.857839  994955 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 02:33:49.857864  994955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 02:33:49.933140  994955 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:33:49.933172  994955 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:33:49.942491  994955 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:33:49.944569  994955 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:33:49.951664  994955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 02:33:50.052497  994955 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
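Condensed, the restart path above replays five kubeadm init phases against the regenerated config instead of re-running a full kubeadm init; the commands are exactly those logged above:

    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml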
	I0116 02:33:50.052547  994955 api_server.go:52] waiting for apiserver process to appear ...
	I0116 02:33:50.052649  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:33:50.553512  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:33:51.053759  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:33:51.553421  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:33:52.052911  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:33:52.552943  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:33:52.575049  994955 command_runner.go:130] > 1089
	I0116 02:33:52.576500  994955 api_server.go:72] duration metric: took 2.523946884s to wait for apiserver process to appear ...
	I0116 02:33:52.576524  994955 api_server.go:88] waiting for apiserver healthz status ...
	I0116 02:33:52.576544  994955 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 02:33:56.563642  994955 api_server.go:279] https://192.168.39.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 02:33:56.563681  994955 api_server.go:103] status: https://192.168.39.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 02:33:56.563706  994955 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 02:33:56.654757  994955 api_server.go:279] https://192.168.39.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 02:33:56.654802  994955 api_server.go:103] status: https://192.168.39.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 02:33:56.654817  994955 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 02:33:56.682134  994955 api_server.go:279] https://192.168.39.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 02:33:56.682182  994955 api_server.go:103] status: https://192.168.39.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 02:33:57.076620  994955 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 02:33:57.082110  994955 api_server.go:279] https://192.168.39.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 02:33:57.082151  994955 api_server.go:103] status: https://192.168.39.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 02:33:57.577311  994955 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 02:33:57.583399  994955 api_server.go:279] https://192.168.39.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 02:33:57.583437  994955 api_server.go:103] status: https://192.168.39.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 02:33:58.077027  994955 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 02:33:58.084843  994955 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
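The polling above finally turns healthy: the first replies reject the anonymous probe with 403, the 500 responses enumerate which post-start hooks are still pending, and the loop exits once /healthz returns 200. A hedged manual equivalent, reusing the client certificate paths from the rest.Config logged earlier and the optional ?verbose query to get the per-check breakdown even on success:

    curl --cacert /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt \
         --cert   /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt \
         --key    /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key \
         "https://192.168.39.50:8443/healthz?verbose"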
	I0116 02:33:58.085017  994955 round_trippers.go:463] GET https://192.168.39.50:8443/version
	I0116 02:33:58.085032  994955 round_trippers.go:469] Request Headers:
	I0116 02:33:58.085069  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:33:58.085082  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:33:58.094964  994955 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0116 02:33:58.095003  994955 round_trippers.go:577] Response Headers:
	I0116 02:33:58.095016  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:33:58.095024  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:33:58.095034  994955 round_trippers.go:580]     Content-Length: 264
	I0116 02:33:58.095043  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:33:58 GMT
	I0116 02:33:58.095059  994955 round_trippers.go:580]     Audit-Id: d39c4861-8779-4198-8f76-93e265ebf740
	I0116 02:33:58.095067  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:33:58.095080  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:33:58.095122  994955 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0116 02:33:58.095231  994955 api_server.go:141] control plane version: v1.28.4
	I0116 02:33:58.095256  994955 api_server.go:131] duration metric: took 5.518724644s to wait for apiserver health ...
	I0116 02:33:58.095267  994955 cni.go:84] Creating CNI manager for ""
	I0116 02:33:58.095276  994955 cni.go:136] 3 nodes found, recommending kindnet
	I0116 02:33:58.096824  994955 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 02:33:58.098347  994955 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 02:33:58.111582  994955 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 02:33:58.111627  994955 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 02:33:58.111638  994955 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 02:33:58.111648  994955 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:33:58.111658  994955 command_runner.go:130] > Access: 2024-01-16 02:33:25.428611593 +0000
	I0116 02:33:58.111666  994955 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 02:33:58.111674  994955 command_runner.go:130] > Change: 2024-01-16 02:33:23.419611593 +0000
	I0116 02:33:58.111679  994955 command_runner.go:130] >  Birth: -
	I0116 02:33:58.111917  994955 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 02:33:58.111939  994955 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 02:33:58.139106  994955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 02:33:59.359323  994955 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 02:33:59.365197  994955 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 02:33:59.371874  994955 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 02:33:59.386983  994955 command_runner.go:130] > daemonset.apps/kindnet configured
	I0116 02:33:59.389559  994955 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.250409641s)
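The apply above leaves the kindnet RBAC objects unchanged and reports the daemonset as configured. A quick way to confirm the rollout, using the same kubectl binary and kubeconfig the command itself used (the daemonset name comes from the apply output, and the kube-system namespace matches the kindnet pods listed just below):

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system rollout status daemonset/kindnet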
	I0116 02:33:59.389601  994955 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 02:33:59.389746  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 02:33:59.389757  994955 round_trippers.go:469] Request Headers:
	I0116 02:33:59.389765  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:33:59.389771  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:33:59.394001  994955 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:33:59.394037  994955 round_trippers.go:577] Response Headers:
	I0116 02:33:59.394048  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:33:59 GMT
	I0116 02:33:59.394058  994955 round_trippers.go:580]     Audit-Id: 626672d9-7e2a-4ac8-a835-211e0b25fa75
	I0116 02:33:59.394067  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:33:59.394074  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:33:59.394086  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:33:59.394095  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:33:59.396118  994955 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"825"},"items":[{"metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83229 chars]
	I0116 02:33:59.400272  994955 system_pods.go:59] 12 kube-system pods found
	I0116 02:33:59.400309  994955 system_pods.go:61] "coredns-5dd5756b68-965sn" [a0898f09-1a64-4beb-bfbf-de15f2e07038] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 02:33:59.400321  994955 system_pods.go:61] "etcd-multinode-835787" [ccb51de1-d565-42b0-bd30-9b1acb1c725d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 02:33:59.400328  994955 system_pods.go:61] "kindnet-755b9" [ee1ea8c4-abfe-4fea-9f71-32840f6790ed] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 02:33:59.400337  994955 system_pods.go:61] "kindnet-hrsvh" [7ff7f33b-72a7-47b1-b4a9-bbbdad91e0d9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 02:33:59.400346  994955 system_pods.go:61] "kindnet-nllfm" [faff798d-63d5-440d-a8f5-1f8d52ab7282] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 02:33:59.400357  994955 system_pods.go:61] "kube-apiserver-multinode-835787" [9c26db11-7208-4540-8a73-407a6edd3a0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 02:33:59.400368  994955 system_pods.go:61] "kube-controller-manager-multinode-835787" [daf9e312-54ad-4a4e-b334-9b84e55f8fef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 02:33:59.400378  994955 system_pods.go:61] "kube-proxy-fpdqr" [42b74cbd-93d8-4ac7-9071-112d5e7c572b] Running
	I0116 02:33:59.400384  994955 system_pods.go:61] "kube-proxy-gbvc2" [74d63696-cb46-484d-937b-8883e6f1df06] Running
	I0116 02:33:59.400391  994955 system_pods.go:61] "kube-proxy-hxx8p" [9c35aa68-14ac-41e1-81f8-8fdb0c48d9f1] Running
	I0116 02:33:59.400397  994955 system_pods.go:61] "kube-scheduler-multinode-835787" [7b9c28cc-6e78-413a-af72-511714d2462e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 02:33:59.400404  994955 system_pods.go:61] "storage-provisioner" [2d18fde8-ca44-4257-8475-100cd8b34ef8] Running
	I0116 02:33:59.400412  994955 system_pods.go:74] duration metric: took 10.799853ms to wait for pod list to return data ...
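
The pod-list wait above is a plain GET of /api/v1/namespaces/kube-system/pods followed by a per-pod readiness summary. A small client-go sketch of the equivalent query (the kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Equivalent of the GET /api/v1/namespaces/kube-system/pods request above.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("  %s: %s\n", p.Name, p.Status.Phase)
	}
}
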
	I0116 02:33:59.400421  994955 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:33:59.400496  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes
	I0116 02:33:59.400505  994955 round_trippers.go:469] Request Headers:
	I0116 02:33:59.400512  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:33:59.400519  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:33:59.403823  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:33:59.403851  994955 round_trippers.go:577] Response Headers:
	I0116 02:33:59.403858  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:33:59 GMT
	I0116 02:33:59.403863  994955 round_trippers.go:580]     Audit-Id: 04b530f3-eef2-44da-a89b-c8fc9774d6f6
	I0116 02:33:59.403871  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:33:59.403882  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:33:59.403894  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:33:59.403903  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:33:59.404369  994955 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"825"},"items":[{"metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16474 chars]
	I0116 02:33:59.405248  994955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:33:59.405279  994955 node_conditions.go:123] node cpu capacity is 2
	I0116 02:33:59.405290  994955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:33:59.405296  994955 node_conditions.go:123] node cpu capacity is 2
	I0116 02:33:59.405302  994955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:33:59.405311  994955 node_conditions.go:123] node cpu capacity is 2
	I0116 02:33:59.405317  994955 node_conditions.go:105] duration metric: took 4.889607ms to run NodePressure ...
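
The NodePressure step reads each node's capacity, which is where the ephemeral-storage and CPU figures printed above come from. A hedged client-go sketch that pulls the same fields (kubeconfig path assumed):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Equivalent of the GET /api/v1/nodes request above: print each node's
	// ephemeral-storage and CPU capacity.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}
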
	I0116 02:33:59.405343  994955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 02:33:59.679761  994955 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0116 02:33:59.679793  994955 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0116 02:33:59.679827  994955 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 02:33:59.679940  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0116 02:33:59.679953  994955 round_trippers.go:469] Request Headers:
	I0116 02:33:59.679965  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:33:59.679973  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:33:59.683319  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:33:59.683349  994955 round_trippers.go:577] Response Headers:
	I0116 02:33:59.683357  994955 round_trippers.go:580]     Audit-Id: 617ed906-9c8d-419c-aada-953cae2a68b7
	I0116 02:33:59.683362  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:33:59.683370  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:33:59.683379  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:33:59.683388  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:33:59.683398  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:33:59 GMT
	I0116 02:33:59.683917  994955 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"846"},"items":[{"metadata":{"name":"etcd-multinode-835787","namespace":"kube-system","uid":"ccb51de1-d565-42b0-bd30-9b1acb1c725d","resourceVersion":"802","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.50:2379","kubernetes.io/config.hash":"108085f55363e386b9f9c083ac579444","kubernetes.io/config.mirror":"108085f55363e386b9f9c083ac579444","kubernetes.io/config.seen":"2024-01-16T02:23:33.032941198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 28859 chars]
	I0116 02:33:59.685459  994955 kubeadm.go:787] kubelet initialised
	I0116 02:33:59.685485  994955 kubeadm.go:788] duration metric: took 5.641708ms waiting for restarted kubelet to initialise ...
	I0116 02:33:59.685500  994955 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:33:59.685583  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 02:33:59.685595  994955 round_trippers.go:469] Request Headers:
	I0116 02:33:59.685607  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:33:59.685618  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:33:59.689110  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:33:59.689137  994955 round_trippers.go:577] Response Headers:
	I0116 02:33:59.689147  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:33:59.689156  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:33:59.689164  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:33:59 GMT
	I0116 02:33:59.689173  994955 round_trippers.go:580]     Audit-Id: 36824c5b-bef9-48b1-aca6-07f260c5dacc
	I0116 02:33:59.689180  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:33:59.689188  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:33:59.690450  994955 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"846"},"items":[{"metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83229 chars]
	I0116 02:33:59.693207  994955 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-965sn" in "kube-system" namespace to be "Ready" ...
	I0116 02:33:59.693348  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:33:59.693366  994955 round_trippers.go:469] Request Headers:
	I0116 02:33:59.693378  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:33:59.693389  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:33:59.696138  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:33:59.696162  994955 round_trippers.go:577] Response Headers:
	I0116 02:33:59.696172  994955 round_trippers.go:580]     Audit-Id: d25e8bd0-e0bc-4cfe-aefc-1bf7584de4d6
	I0116 02:33:59.696181  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:33:59.696190  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:33:59.696197  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:33:59.696204  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:33:59.696223  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:33:59 GMT
	I0116 02:33:59.696387  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 02:33:59.697001  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:33:59.697021  994955 round_trippers.go:469] Request Headers:
	I0116 02:33:59.697034  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:33:59.697044  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:33:59.699338  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:33:59.699360  994955 round_trippers.go:577] Response Headers:
	I0116 02:33:59.699370  994955 round_trippers.go:580]     Audit-Id: 17a99d39-9672-40f8-8f80-2452a683357c
	I0116 02:33:59.699377  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:33:59.699383  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:33:59.699388  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:33:59.699393  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:33:59.699398  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:33:59 GMT
	I0116 02:33:59.699604  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 02:33:59.700032  994955 pod_ready.go:97] node "multinode-835787" hosting pod "coredns-5dd5756b68-965sn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-835787" has status "Ready":"False"
	I0116 02:33:59.700063  994955 pod_ready.go:81] duration metric: took 6.827226ms waiting for pod "coredns-5dd5756b68-965sn" in "kube-system" namespace to be "Ready" ...
	E0116 02:33:59.700077  994955 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-835787" hosting pod "coredns-5dd5756b68-965sn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-835787" has status "Ready":"False"
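
The wait loop here short-circuits when the hosting node reports Ready=False: instead of blocking the full 4m0s for the pod, it records the pod as skipped, which is what the "(skipping!)" lines show. A simplified sketch of that check with client-go; the pod name and kubeconfig path are illustrative assumptions:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-965sn", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	if !nodeReady(node) {
		// Mirrors the "(skipping!)" behaviour: no point waiting on the pod
		// while its node reports Ready=False.
		fmt.Printf("node %s not Ready, skipping wait for %s\n", node.Name, pod.Name)
		return
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
}
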
	I0116 02:33:59.700094  994955 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:33:59.700191  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-835787
	I0116 02:33:59.700202  994955 round_trippers.go:469] Request Headers:
	I0116 02:33:59.700220  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:33:59.700234  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:33:59.702306  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:33:59.702329  994955 round_trippers.go:577] Response Headers:
	I0116 02:33:59.702339  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:33:59.702347  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:33:59 GMT
	I0116 02:33:59.702356  994955 round_trippers.go:580]     Audit-Id: 7c3ed899-72de-4537-b559-7309fb66e559
	I0116 02:33:59.702365  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:33:59.702373  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:33:59.702382  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:33:59.702546  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-835787","namespace":"kube-system","uid":"ccb51de1-d565-42b0-bd30-9b1acb1c725d","resourceVersion":"802","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.50:2379","kubernetes.io/config.hash":"108085f55363e386b9f9c083ac579444","kubernetes.io/config.mirror":"108085f55363e386b9f9c083ac579444","kubernetes.io/config.seen":"2024-01-16T02:23:33.032941198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0116 02:33:59.703030  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:33:59.703047  994955 round_trippers.go:469] Request Headers:
	I0116 02:33:59.703055  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:33:59.703069  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:33:59.704975  994955 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:33:59.704990  994955 round_trippers.go:577] Response Headers:
	I0116 02:33:59.704996  994955 round_trippers.go:580]     Audit-Id: 3e94f8c9-61db-4838-b80a-1683bc653b27
	I0116 02:33:59.705004  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:33:59.705009  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:33:59.705015  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:33:59.705022  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:33:59.705030  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:33:59 GMT
	I0116 02:33:59.705170  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 02:33:59.705494  994955 pod_ready.go:97] node "multinode-835787" hosting pod "etcd-multinode-835787" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-835787" has status "Ready":"False"
	I0116 02:33:59.705515  994955 pod_ready.go:81] duration metric: took 5.413832ms waiting for pod "etcd-multinode-835787" in "kube-system" namespace to be "Ready" ...
	E0116 02:33:59.705523  994955 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-835787" hosting pod "etcd-multinode-835787" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-835787" has status "Ready":"False"
	I0116 02:33:59.705538  994955 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:33:59.705592  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-835787
	I0116 02:33:59.705598  994955 round_trippers.go:469] Request Headers:
	I0116 02:33:59.705605  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:33:59.705613  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:33:59.707582  994955 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:33:59.707600  994955 round_trippers.go:577] Response Headers:
	I0116 02:33:59.707609  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:33:59 GMT
	I0116 02:33:59.707617  994955 round_trippers.go:580]     Audit-Id: 9a3df086-4a62-4916-8e12-6a12a93da6e0
	I0116 02:33:59.707637  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:33:59.707650  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:33:59.707660  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:33:59.707672  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:33:59.707851  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-835787","namespace":"kube-system","uid":"9c26db11-7208-4540-8a73-407a6edd3a0b","resourceVersion":"799","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.50:8443","kubernetes.io/config.hash":"b27880b6b81ca11dc023b4901941ff6f","kubernetes.io/config.mirror":"b27880b6b81ca11dc023b4901941ff6f","kubernetes.io/config.seen":"2024-01-16T02:23:33.032945135Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I0116 02:33:59.708270  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:33:59.708283  994955 round_trippers.go:469] Request Headers:
	I0116 02:33:59.708290  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:33:59.708301  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:33:59.710171  994955 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:33:59.710186  994955 round_trippers.go:577] Response Headers:
	I0116 02:33:59.710196  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:33:59.710204  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:33:59.710211  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:33:59 GMT
	I0116 02:33:59.710220  994955 round_trippers.go:580]     Audit-Id: 370b874d-c869-47b4-8515-944380588a8c
	I0116 02:33:59.710230  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:33:59.710242  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:33:59.710376  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 02:33:59.710765  994955 pod_ready.go:97] node "multinode-835787" hosting pod "kube-apiserver-multinode-835787" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-835787" has status "Ready":"False"
	I0116 02:33:59.710788  994955 pod_ready.go:81] duration metric: took 5.240699ms waiting for pod "kube-apiserver-multinode-835787" in "kube-system" namespace to be "Ready" ...
	E0116 02:33:59.710801  994955 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-835787" hosting pod "kube-apiserver-multinode-835787" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-835787" has status "Ready":"False"
	I0116 02:33:59.710815  994955 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:33:59.710886  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-835787
	I0116 02:33:59.710896  994955 round_trippers.go:469] Request Headers:
	I0116 02:33:59.710906  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:33:59.710919  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:33:59.712729  994955 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:33:59.712749  994955 round_trippers.go:577] Response Headers:
	I0116 02:33:59.712758  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:33:59.712767  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:33:59.712775  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:33:59.712786  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:33:59.712798  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:33:59 GMT
	I0116 02:33:59.712809  994955 round_trippers.go:580]     Audit-Id: 1f586951-f9fd-49a5-a8ae-c2a00600eb1a
	I0116 02:33:59.712995  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-835787","namespace":"kube-system","uid":"daf9e312-54ad-4a4e-b334-9b84e55f8fef","resourceVersion":"800","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6adb137abb6e7ac4dcf8e50e41a3773b","kubernetes.io/config.mirror":"6adb137abb6e7ac4dcf8e50e41a3773b","kubernetes.io/config.seen":"2024-01-16T02:23:33.032946146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7212 chars]
	I0116 02:33:59.790772  994955 request.go:629] Waited for 77.276849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:33:59.790889  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:33:59.790903  994955 round_trippers.go:469] Request Headers:
	I0116 02:33:59.790929  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:33:59.790943  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:33:59.794260  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:33:59.794290  994955 round_trippers.go:577] Response Headers:
	I0116 02:33:59.794300  994955 round_trippers.go:580]     Audit-Id: e15d60b9-7b9a-46c5-997c-062c5f47084c
	I0116 02:33:59.794309  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:33:59.794318  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:33:59.794326  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:33:59.794335  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:33:59.794343  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:33:59 GMT
	I0116 02:33:59.794767  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 02:33:59.795101  994955 pod_ready.go:97] node "multinode-835787" hosting pod "kube-controller-manager-multinode-835787" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-835787" has status "Ready":"False"
	I0116 02:33:59.795126  994955 pod_ready.go:81] duration metric: took 84.298519ms waiting for pod "kube-controller-manager-multinode-835787" in "kube-system" namespace to be "Ready" ...
	E0116 02:33:59.795140  994955 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-835787" hosting pod "kube-controller-manager-multinode-835787" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-835787" has status "Ready":"False"
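
The "Waited ... due to client-side throttling, not priority and fairness" lines in this stretch come from client-go's client-side rate limiter, configured through rest.Config.QPS and Burst. With both left at zero (as the client config dumped later in this log shows), client-go falls back to its conservative defaults of 5 QPS with a burst of 10, which is enough to queue the back-to-back GETs seen here. A brief sketch; the raised values are illustrative, not what minikube uses:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}

	// Zero values mean the defaults (5 QPS, burst 10); raising them reduces
	// the client-side queueing reported above.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}
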
	I0116 02:33:59.795150  994955 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fpdqr" in "kube-system" namespace to be "Ready" ...
	I0116 02:33:59.990626  994955 request.go:629] Waited for 195.394559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpdqr
	I0116 02:33:59.990718  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpdqr
	I0116 02:33:59.990726  994955 round_trippers.go:469] Request Headers:
	I0116 02:33:59.990734  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:33:59.990742  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:33:59.993899  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:33:59.993929  994955 round_trippers.go:577] Response Headers:
	I0116 02:33:59.993939  994955 round_trippers.go:580]     Audit-Id: 16273363-982e-4d23-b73b-67505470c2cb
	I0116 02:33:59.993948  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:33:59.993955  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:33:59.993961  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:33:59.993969  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:33:59.993976  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:33:59 GMT
	I0116 02:33:59.994171  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fpdqr","generateName":"kube-proxy-","namespace":"kube-system","uid":"42b74cbd-93d8-4ac7-9071-112d5e7c572b","resourceVersion":"733","creationTimestamp":"2024-01-16T02:25:22Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0116 02:34:00.190122  994955 request.go:629] Waited for 195.398707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m03
	I0116 02:34:00.190188  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m03
	I0116 02:34:00.190193  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:00.190201  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:00.190223  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:00.193384  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:00.193408  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:00.193415  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:00.193421  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:00.193427  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:00 GMT
	I0116 02:34:00.193435  994955 round_trippers.go:580]     Audit-Id: 2f602de4-fce0-4545-9e49-ca645a9f92a1
	I0116 02:34:00.193441  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:00.193447  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:00.193662  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m03","uid":"67df5a31-bd76-4643-b628-d7570878cf19","resourceVersion":"760","creationTimestamp":"2024-01-16T02:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_26_05_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 4085 chars]
	I0116 02:34:00.194107  994955 pod_ready.go:92] pod "kube-proxy-fpdqr" in "kube-system" namespace has status "Ready":"True"
	I0116 02:34:00.194136  994955 pod_ready.go:81] duration metric: took 398.967423ms waiting for pod "kube-proxy-fpdqr" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:00.194150  994955 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gbvc2" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:00.390168  994955 request.go:629] Waited for 195.939796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbvc2
	I0116 02:34:00.390251  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbvc2
	I0116 02:34:00.390259  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:00.390269  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:00.390300  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:00.393384  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:00.393410  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:00.393422  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:00.393436  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:00.393444  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:00.393449  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:00 GMT
	I0116 02:34:00.393454  994955 round_trippers.go:580]     Audit-Id: 4c4f04e6-8124-4e68-847c-e0743abde4fd
	I0116 02:34:00.393459  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:00.393703  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gbvc2","generateName":"kube-proxy-","namespace":"kube-system","uid":"74d63696-cb46-484d-937b-8883e6f1df06","resourceVersion":"824","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0116 02:34:00.590662  994955 request.go:629] Waited for 196.370744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:00.590777  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:00.590788  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:00.590799  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:00.590811  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:00.593563  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:00.593584  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:00.593600  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:00.593609  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:00.593617  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:00 GMT
	I0116 02:34:00.593626  994955 round_trippers.go:580]     Audit-Id: ef80f613-b7ec-467d-ad8a-25005368f799
	I0116 02:34:00.593635  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:00.593644  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:00.593921  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 02:34:00.594296  994955 pod_ready.go:97] node "multinode-835787" hosting pod "kube-proxy-gbvc2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-835787" has status "Ready":"False"
	I0116 02:34:00.594317  994955 pod_ready.go:81] duration metric: took 400.157029ms waiting for pod "kube-proxy-gbvc2" in "kube-system" namespace to be "Ready" ...
	E0116 02:34:00.594326  994955 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-835787" hosting pod "kube-proxy-gbvc2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-835787" has status "Ready":"False"
	I0116 02:34:00.594336  994955 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hxx8p" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:00.790420  994955 request.go:629] Waited for 195.980977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxx8p
	I0116 02:34:00.790513  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxx8p
	I0116 02:34:00.790518  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:00.790527  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:00.790536  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:00.793401  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:00.793423  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:00.793453  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:00.793467  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:00.793476  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:00.793483  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:00 GMT
	I0116 02:34:00.793488  994955 round_trippers.go:580]     Audit-Id: 269d802f-c937-455e-b340-73c36ea59650
	I0116 02:34:00.793495  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:00.793622  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hxx8p","generateName":"kube-proxy-","namespace":"kube-system","uid":"9c35aa68-14ac-41e1-81f8-8fdb0c48d9f1","resourceVersion":"525","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0116 02:34:00.990548  994955 request.go:629] Waited for 196.39686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:34:00.990623  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:34:00.990629  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:00.990637  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:00.990643  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:00.993433  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:00.993457  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:00.993468  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:00.993476  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:00 GMT
	I0116 02:34:00.993485  994955 round_trippers.go:580]     Audit-Id: d61c8b24-315c-4ed4-8fea-aba213e8c18f
	I0116 02:34:00.993496  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:00.993504  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:00.993512  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:00.993664  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"723","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_26_05_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0116 02:34:00.993969  994955 pod_ready.go:92] pod "kube-proxy-hxx8p" in "kube-system" namespace has status "Ready":"True"
	I0116 02:34:00.993988  994955 pod_ready.go:81] duration metric: took 399.643374ms waiting for pod "kube-proxy-hxx8p" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:00.993998  994955 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:01.190260  994955 request.go:629] Waited for 196.183755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-835787
	I0116 02:34:01.190364  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-835787
	I0116 02:34:01.190386  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:01.190398  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:01.190408  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:01.193234  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:01.193261  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:01.193269  994955 round_trippers.go:580]     Audit-Id: 1d2c3e53-c834-4c6e-a8cd-9ce1b8577d84
	I0116 02:34:01.193275  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:01.193281  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:01.193292  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:01.193304  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:01.193309  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:01 GMT
	I0116 02:34:01.193576  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-835787","namespace":"kube-system","uid":"7b9c28cc-6e78-413a-af72-511714d2462e","resourceVersion":"801","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"230f2dad53142209ac2ae48ed27aa7b4","kubernetes.io/config.mirror":"230f2dad53142209ac2ae48ed27aa7b4","kubernetes.io/config.seen":"2024-01-16T02:23:33.032947019Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4924 chars]
	I0116 02:34:01.390470  994955 request.go:629] Waited for 196.416631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:01.390579  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:01.390594  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:01.390606  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:01.390619  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:01.394154  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:01.394190  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:01.394203  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:01.394212  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:01.394221  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:01.394230  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:01 GMT
	I0116 02:34:01.394239  994955 round_trippers.go:580]     Audit-Id: 06a331c5-48c1-45f1-b3e6-ea1442f5393c
	I0116 02:34:01.394247  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:01.394410  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 02:34:01.394793  994955 pod_ready.go:97] node "multinode-835787" hosting pod "kube-scheduler-multinode-835787" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-835787" has status "Ready":"False"
	I0116 02:34:01.394825  994955 pod_ready.go:81] duration metric: took 400.820021ms waiting for pod "kube-scheduler-multinode-835787" in "kube-system" namespace to be "Ready" ...
	E0116 02:34:01.394835  994955 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-835787" hosting pod "kube-scheduler-multinode-835787" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-835787" has status "Ready":"False"
	I0116 02:34:01.394845  994955 pod_ready.go:38] duration metric: took 1.709332989s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:34:01.394865  994955 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 02:34:01.407050  994955 command_runner.go:130] > -16
	I0116 02:34:01.407837  994955 ops.go:34] apiserver oom_adj: -16
	I0116 02:34:01.407858  994955 kubeadm.go:640] restartCluster took 22.850391625s
	I0116 02:34:01.407867  994955 kubeadm.go:406] StartCluster complete in 22.901761515s
	I0116 02:34:01.407890  994955 settings.go:142] acquiring lock: {Name:mk39c69fb5454efd5afab021c02c3bec1f1b4e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:34:01.407990  994955 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:34:01.408831  994955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:34:01.409117  994955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 02:34:01.409274  994955 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 02:34:01.409447  994955 config.go:182] Loaded profile config "multinode-835787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:34:01.412252  994955 out.go:177] * Enabled addons: 
	I0116 02:34:01.409462  994955 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:34:01.413948  994955 addons.go:505] enable addons completed in 4.676123ms: enabled=[]
	I0116 02:34:01.414228  994955 kapi.go:59] client config for multinode-835787: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key", CAFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:34:01.414654  994955 round_trippers.go:463] GET https://192.168.39.50:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:34:01.414667  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:01.414678  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:01.414687  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:01.417597  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:01.417614  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:01.417623  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:01 GMT
	I0116 02:34:01.417631  994955 round_trippers.go:580]     Audit-Id: 48c72e47-98e8-4517-83e7-0da4715aaad9
	I0116 02:34:01.417639  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:01.417648  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:01.417666  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:01.417676  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:01.417690  994955 round_trippers.go:580]     Content-Length: 291
	I0116 02:34:01.417747  994955 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c3d1d02d-1d3d-4837-b3ba-04423f0d8104","resourceVersion":"842","creationTimestamp":"2024-01-16T02:23:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 02:34:01.417981  994955 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-835787" context rescaled to 1 replicas
	I0116 02:34:01.418054  994955 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:34:01.419996  994955 out.go:177] * Verifying Kubernetes components...
	I0116 02:34:01.421475  994955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:34:01.519749  994955 command_runner.go:130] > apiVersion: v1
	I0116 02:34:01.519805  994955 command_runner.go:130] > data:
	I0116 02:34:01.519812  994955 command_runner.go:130] >   Corefile: |
	I0116 02:34:01.519816  994955 command_runner.go:130] >     .:53 {
	I0116 02:34:01.519820  994955 command_runner.go:130] >         log
	I0116 02:34:01.519825  994955 command_runner.go:130] >         errors
	I0116 02:34:01.519829  994955 command_runner.go:130] >         health {
	I0116 02:34:01.519834  994955 command_runner.go:130] >            lameduck 5s
	I0116 02:34:01.519838  994955 command_runner.go:130] >         }
	I0116 02:34:01.519843  994955 command_runner.go:130] >         ready
	I0116 02:34:01.519848  994955 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0116 02:34:01.519852  994955 command_runner.go:130] >            pods insecure
	I0116 02:34:01.519859  994955 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0116 02:34:01.519866  994955 command_runner.go:130] >            ttl 30
	I0116 02:34:01.519870  994955 command_runner.go:130] >         }
	I0116 02:34:01.519874  994955 command_runner.go:130] >         prometheus :9153
	I0116 02:34:01.519878  994955 command_runner.go:130] >         hosts {
	I0116 02:34:01.519885  994955 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0116 02:34:01.519890  994955 command_runner.go:130] >            fallthrough
	I0116 02:34:01.519899  994955 command_runner.go:130] >         }
	I0116 02:34:01.519911  994955 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0116 02:34:01.519920  994955 command_runner.go:130] >            max_concurrent 1000
	I0116 02:34:01.519930  994955 command_runner.go:130] >         }
	I0116 02:34:01.519933  994955 command_runner.go:130] >         cache 30
	I0116 02:34:01.519941  994955 command_runner.go:130] >         loop
	I0116 02:34:01.519945  994955 command_runner.go:130] >         reload
	I0116 02:34:01.519949  994955 command_runner.go:130] >         loadbalance
	I0116 02:34:01.519953  994955 command_runner.go:130] >     }
	I0116 02:34:01.519957  994955 command_runner.go:130] > kind: ConfigMap
	I0116 02:34:01.519961  994955 command_runner.go:130] > metadata:
	I0116 02:34:01.519966  994955 command_runner.go:130] >   creationTimestamp: "2024-01-16T02:23:32Z"
	I0116 02:34:01.519973  994955 command_runner.go:130] >   name: coredns
	I0116 02:34:01.519977  994955 command_runner.go:130] >   namespace: kube-system
	I0116 02:34:01.519981  994955 command_runner.go:130] >   resourceVersion: "402"
	I0116 02:34:01.519986  994955 command_runner.go:130] >   uid: 5d0b97bf-0e87-435e-a3ac-c1a3ea5ab870
	I0116 02:34:01.520073  994955 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 02:34:01.520086  994955 node_ready.go:35] waiting up to 6m0s for node "multinode-835787" to be "Ready" ...
	I0116 02:34:01.590465  994955 request.go:629] Waited for 70.260705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:01.590547  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:01.590552  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:01.590585  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:01.590601  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:01.593688  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:01.593718  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:01.593729  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:01 GMT
	I0116 02:34:01.593737  994955 round_trippers.go:580]     Audit-Id: 3e63e8d8-7ceb-40fb-bb1a-fbd83b776e8a
	I0116 02:34:01.593743  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:01.593751  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:01.593759  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:01.593770  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:01.594309  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 02:34:02.020978  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:02.021007  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:02.021015  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:02.021022  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:02.023913  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:02.023939  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:02.023950  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:02.023960  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:02.023968  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:02.023976  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:02.023985  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:01 GMT
	I0116 02:34:02.023997  994955 round_trippers.go:580]     Audit-Id: 201af485-1089-492b-9985-7b04f5374cd7
	I0116 02:34:02.024142  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 02:34:02.521317  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:02.521353  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:02.521362  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:02.521368  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:02.524173  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:02.524202  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:02.524214  994955 round_trippers.go:580]     Audit-Id: 9bd0cf51-7e68-48b7-b2a9-e681301f4b21
	I0116 02:34:02.524223  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:02.524230  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:02.524238  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:02.524245  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:02.524253  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:02 GMT
	I0116 02:34:02.524505  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 02:34:03.021369  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:03.021404  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:03.021416  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:03.021423  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:03.024485  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:03.024518  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:03.024541  994955 round_trippers.go:580]     Audit-Id: 4b67b51c-cf03-43cf-a6f1-0a4f7e726ad6
	I0116 02:34:03.024547  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:03.024552  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:03.024557  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:03.024562  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:03.024567  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:03 GMT
	I0116 02:34:03.025061  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 02:34:03.520747  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:03.520776  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:03.520785  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:03.520792  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:03.523429  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:03.523459  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:03.523470  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:03.523478  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:03.523486  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:03 GMT
	I0116 02:34:03.523495  994955 round_trippers.go:580]     Audit-Id: 8b3b2ed0-3410-42a9-beaf-591c5fde9219
	I0116 02:34:03.523503  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:03.523529  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:03.524056  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 02:34:03.524494  994955 node_ready.go:58] node "multinode-835787" has status "Ready":"False"
	I0116 02:34:04.020691  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:04.020715  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:04.020724  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:04.020730  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:04.023609  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:04.023635  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:04.023647  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:04.023656  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:04.023665  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:04.023675  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:04.023686  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:04 GMT
	I0116 02:34:04.023694  994955 round_trippers.go:580]     Audit-Id: ac0f9aa0-0d78-4b98-b702-64db2bbf0b4d
	I0116 02:34:04.024058  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 02:34:04.521212  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:04.521247  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:04.521256  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:04.521262  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:04.524228  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:04.524263  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:04.524276  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:04.524286  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:04.524295  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:04 GMT
	I0116 02:34:04.524305  994955 round_trippers.go:580]     Audit-Id: cb8c7247-42d3-4648-a292-a3602ee59a13
	I0116 02:34:04.524315  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:04.524325  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:04.524493  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 02:34:05.021111  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:05.021138  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:05.021146  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:05.021153  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:05.025103  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:05.025132  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:05.025141  994955 round_trippers.go:580]     Audit-Id: 82892f0a-2094-4cf9-b059-dc97fc5a2a2a
	I0116 02:34:05.025150  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:05.025156  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:05.025163  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:05.025170  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:05.025182  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:05 GMT
	I0116 02:34:05.025621  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 02:34:05.520340  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:05.520379  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:05.520392  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:05.520427  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:05.525898  994955 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 02:34:05.525932  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:05.525943  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:05.525952  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:05 GMT
	I0116 02:34:05.525964  994955 round_trippers.go:580]     Audit-Id: f2c34a3f-5382-4f4f-b0dc-917269a69c74
	I0116 02:34:05.525972  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:05.525983  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:05.525992  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:05.526471  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 02:34:05.526945  994955 node_ready.go:58] node "multinode-835787" has status "Ready":"False"
	I0116 02:34:06.021185  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:06.021221  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:06.021230  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:06.021237  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:06.025214  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:06.025247  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:06.025258  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:06 GMT
	I0116 02:34:06.025268  994955 round_trippers.go:580]     Audit-Id: b5582d54-1f37-416e-9882-c3e897209858
	I0116 02:34:06.025301  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:06.025315  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:06.025325  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:06.025337  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:06.026583  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 02:34:06.520320  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:06.520355  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:06.520364  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:06.520371  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:06.523589  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:06.523628  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:06.523638  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:06.523647  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:06.523655  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:06 GMT
	I0116 02:34:06.523663  994955 round_trippers.go:580]     Audit-Id: 083bbd72-601b-44f8-99e6-e49a08297434
	I0116 02:34:06.523675  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:06.523683  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:06.523926  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"759","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 02:34:07.020624  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:07.020661  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:07.020670  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:07.020677  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:07.023770  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:07.023805  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:07.023816  994955 round_trippers.go:580]     Audit-Id: 4faf841b-f14e-4232-b715-1f6145d41024
	I0116 02:34:07.023824  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:07.023831  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:07.023838  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:07.023845  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:07.023852  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:07 GMT
	I0116 02:34:07.024040  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:07.024524  994955 node_ready.go:49] node "multinode-835787" has status "Ready":"True"
	I0116 02:34:07.024555  994955 node_ready.go:38] duration metric: took 5.504442547s waiting for node "multinode-835787" to be "Ready" ...
	I0116 02:34:07.024567  994955 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:34:07.024652  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 02:34:07.024662  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:07.024675  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:07.024681  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:07.030034  994955 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 02:34:07.030063  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:07.030075  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:07 GMT
	I0116 02:34:07.030083  994955 round_trippers.go:580]     Audit-Id: 6d093241-1676-473b-aa69-3911ebfb9f69
	I0116 02:34:07.030091  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:07.030099  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:07.030107  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:07.030115  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:07.032305  994955 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"888"},"items":[{"metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82462 chars]
	I0116 02:34:07.035247  994955 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-965sn" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:07.035347  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:07.035357  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:07.035365  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:07.035371  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:07.039653  994955 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:34:07.039681  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:07.039692  994955 round_trippers.go:580]     Audit-Id: 26e5938d-91da-4389-8852-501cc77626a7
	I0116 02:34:07.039700  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:07.039709  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:07.039721  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:07.039730  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:07.039741  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:07 GMT
	I0116 02:34:07.039955  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 02:34:07.040528  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:07.040556  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:07.040568  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:07.040577  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:07.042759  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:07.042782  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:07.042792  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:07 GMT
	I0116 02:34:07.042801  994955 round_trippers.go:580]     Audit-Id: 888c7240-d057-4347-8687-526db5624a38
	I0116 02:34:07.042810  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:07.042818  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:07.042827  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:07.042836  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:07.042987  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:07.536240  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:07.536271  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:07.536283  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:07.536289  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:07.539192  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:07.539217  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:07.539228  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:07.539235  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:07.539243  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:07.539250  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:07.539258  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:07 GMT
	I0116 02:34:07.539266  994955 round_trippers.go:580]     Audit-Id: 9416ea3b-4565-4031-92d6-6e85e38999a2
	I0116 02:34:07.539469  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 02:34:07.540055  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:07.540072  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:07.540079  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:07.540085  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:07.542158  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:07.542179  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:07.542186  994955 round_trippers.go:580]     Audit-Id: b670ea19-b939-49af-b212-63be9bf8a2fb
	I0116 02:34:07.542192  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:07.542197  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:07.542205  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:07.542213  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:07.542221  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:07 GMT
	I0116 02:34:07.542750  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:08.036496  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:08.036525  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:08.036534  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:08.036540  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:08.039788  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:08.039807  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:08.039814  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:08 GMT
	I0116 02:34:08.039820  994955 round_trippers.go:580]     Audit-Id: 7e88960f-a484-4315-af0a-6357169f2d69
	I0116 02:34:08.039826  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:08.039834  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:08.039846  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:08.039855  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:08.040086  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 02:34:08.040539  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:08.040552  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:08.040559  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:08.040565  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:08.043718  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:08.043736  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:08.043742  994955 round_trippers.go:580]     Audit-Id: 537717e4-1278-406a-9cc2-2f8115db47e3
	I0116 02:34:08.043749  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:08.043757  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:08.043765  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:08.043774  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:08.043783  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:08 GMT
	I0116 02:34:08.043889  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:08.535532  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:08.535564  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:08.535573  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:08.535579  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:08.538929  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:08.538956  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:08.538967  994955 round_trippers.go:580]     Audit-Id: f20045c5-9752-4274-a102-d4232d420b81
	I0116 02:34:08.538976  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:08.538985  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:08.538993  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:08.539002  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:08.539009  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:08 GMT
	I0116 02:34:08.539275  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 02:34:08.539760  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:08.539777  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:08.539784  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:08.539792  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:08.543197  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:08.543226  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:08.543236  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:08.543244  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:08.543251  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:08.543258  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:08 GMT
	I0116 02:34:08.543266  994955 round_trippers.go:580]     Audit-Id: 4866c761-1252-4173-8de8-f186126fa262
	I0116 02:34:08.543273  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:08.543568  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:09.035611  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:09.035644  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:09.035654  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:09.035660  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:09.039095  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:09.039131  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:09.039139  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:09.039145  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:09.039151  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:09.039157  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:09.039162  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:09 GMT
	I0116 02:34:09.039167  994955 round_trippers.go:580]     Audit-Id: f73240af-0eb3-425a-b783-95014c98a337
	I0116 02:34:09.039356  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 02:34:09.040036  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:09.040065  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:09.040077  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:09.040086  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:09.042400  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:09.042425  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:09.042432  994955 round_trippers.go:580]     Audit-Id: 1f3665ad-55a2-4df1-97cd-9d8a35c1a6af
	I0116 02:34:09.042438  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:09.042443  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:09.042449  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:09.042455  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:09.042464  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:09 GMT
	I0116 02:34:09.042600  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:09.042969  994955 pod_ready.go:102] pod "coredns-5dd5756b68-965sn" in "kube-system" namespace has status "Ready":"False"
	I0116 02:34:09.535517  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:09.535549  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:09.535560  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:09.535568  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:09.538127  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:09.538156  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:09.538168  994955 round_trippers.go:580]     Audit-Id: 2a874bd2-52f5-4d33-a5ef-b3a3b4e0c45c
	I0116 02:34:09.538184  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:09.538192  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:09.538201  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:09.538206  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:09.538212  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:09 GMT
	I0116 02:34:09.538430  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 02:34:09.538951  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:09.538967  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:09.538977  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:09.538987  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:09.541320  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:09.541343  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:09.541350  994955 round_trippers.go:580]     Audit-Id: 2df2645c-afb9-46b7-aab5-c0d1d97774bd
	I0116 02:34:09.541355  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:09.541360  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:09.541365  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:09.541370  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:09.541375  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:09 GMT
	I0116 02:34:09.541588  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:10.036275  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:10.036308  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:10.036322  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:10.036332  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:10.039088  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:10.039121  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:10.039131  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:10.039145  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:10.039156  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:10.039180  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:10.039191  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:10 GMT
	I0116 02:34:10.039201  994955 round_trippers.go:580]     Audit-Id: 3898c75b-383d-4c7b-a661-fd80d01c4b5a
	I0116 02:34:10.039395  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 02:34:10.040054  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:10.040081  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:10.040093  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:10.040102  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:10.042962  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:10.042980  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:10.042987  994955 round_trippers.go:580]     Audit-Id: ed3c070c-627d-47b8-98e2-856c5d3eb595
	I0116 02:34:10.042992  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:10.042999  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:10.043007  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:10.043018  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:10.043028  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:10 GMT
	I0116 02:34:10.043152  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:10.535579  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:10.535608  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:10.535617  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:10.535623  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:10.538485  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:10.538521  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:10.538529  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:10.538535  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:10.538553  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:10 GMT
	I0116 02:34:10.538565  994955 round_trippers.go:580]     Audit-Id: 0c57fd58-8802-4416-ad7b-83a817ba379a
	I0116 02:34:10.538575  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:10.538587  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:10.538940  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 02:34:10.539438  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:10.539458  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:10.539467  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:10.539473  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:10.541720  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:10.541739  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:10.541747  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:10.541753  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:10.541758  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:10.541764  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:10.541775  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:10 GMT
	I0116 02:34:10.541784  994955 round_trippers.go:580]     Audit-Id: 0b7c30d4-73df-46a6-8379-86a8c344f5fb
	I0116 02:34:10.541925  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:11.035582  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:11.035620  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:11.035631  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:11.035639  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:11.039373  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:11.039399  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:11.039409  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:11.039418  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:11.039426  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:11.039434  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:11.039441  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:11 GMT
	I0116 02:34:11.039449  994955 round_trippers.go:580]     Audit-Id: 1f1bdf7b-d062-461b-b358-49e0e6512f1d
	I0116 02:34:11.039590  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 02:34:11.040072  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:11.040088  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:11.040096  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:11.040101  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:11.042806  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:11.042825  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:11.042833  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:11.042838  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:11.042845  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:11.042853  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:11.042862  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:11 GMT
	I0116 02:34:11.042875  994955 round_trippers.go:580]     Audit-Id: fcf13e98-4448-4534-87cc-abb7a5cab922
	I0116 02:34:11.043059  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:11.043396  994955 pod_ready.go:102] pod "coredns-5dd5756b68-965sn" in "kube-system" namespace has status "Ready":"False"
	I0116 02:34:11.535673  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:11.535701  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:11.535709  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:11.535716  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:11.539138  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:11.539169  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:11.539182  994955 round_trippers.go:580]     Audit-Id: 9c0d6cb3-6965-47bb-b876-b80f37fe0bcd
	I0116 02:34:11.539190  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:11.539207  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:11.539215  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:11.539223  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:11.539231  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:11 GMT
	I0116 02:34:11.539956  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 02:34:11.540444  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:11.540460  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:11.540468  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:11.540474  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:11.543905  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:11.543928  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:11.543938  994955 round_trippers.go:580]     Audit-Id: f20dd108-f792-43d4-a9bc-3912170d092a
	I0116 02:34:11.543951  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:11.543959  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:11.543966  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:11.543973  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:11.543981  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:11 GMT
	I0116 02:34:11.544149  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:12.035792  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:12.035822  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:12.035834  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:12.035840  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:12.039030  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:12.039066  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:12.039075  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:12 GMT
	I0116 02:34:12.039082  994955 round_trippers.go:580]     Audit-Id: 51e35523-ab7d-4941-af0d-d102a2e6ccb6
	I0116 02:34:12.039093  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:12.039101  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:12.039109  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:12.039117  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:12.039372  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 02:34:12.039941  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:12.039958  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:12.039966  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:12.039975  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:12.042663  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:12.042683  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:12.042692  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:12.042699  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:12.042712  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:12 GMT
	I0116 02:34:12.042719  994955 round_trippers.go:580]     Audit-Id: 05283495-e34f-4cc1-a730-a3f148ec5123
	I0116 02:34:12.042728  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:12.042740  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:12.043019  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:12.536366  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:12.536395  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:12.536405  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:12.536411  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:12.539451  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:12.539484  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:12.539495  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:12.539501  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:12 GMT
	I0116 02:34:12.539524  994955 round_trippers.go:580]     Audit-Id: 96bc0c37-6465-4ab6-820f-afe5125d1c09
	I0116 02:34:12.539530  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:12.539535  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:12.539541  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:12.539886  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 02:34:12.540478  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:12.540496  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:12.540503  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:12.540509  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:12.543064  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:12.543089  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:12.543099  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:12.543108  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:12.543123  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:12.543132  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:12 GMT
	I0116 02:34:12.543140  994955 round_trippers.go:580]     Audit-Id: 768d5cc2-0a5f-4934-82a4-99b5bb40b363
	I0116 02:34:12.543151  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:12.543311  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:13.035937  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:13.035972  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:13.035981  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:13.035990  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:13.039045  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:13.039069  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:13.039077  994955 round_trippers.go:580]     Audit-Id: f36e7303-614e-48e8-9316-96837522c710
	I0116 02:34:13.039083  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:13.039088  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:13.039093  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:13.039098  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:13.039103  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:13 GMT
	I0116 02:34:13.039853  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 02:34:13.040364  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:13.040380  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:13.040388  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:13.040394  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:13.042862  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:13.042880  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:13.042889  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:13.042895  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:13.042906  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:13 GMT
	I0116 02:34:13.042914  994955 round_trippers.go:580]     Audit-Id: 81ae6c2f-52d2-4bc2-a6a8-92c9378bfefb
	I0116 02:34:13.042924  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:13.042933  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:13.043071  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:13.535741  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:13.535778  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:13.535790  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:13.535800  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:13.538606  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:13.538643  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:13.538654  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:13.538663  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:13.538670  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:13.538679  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:13 GMT
	I0116 02:34:13.538686  994955 round_trippers.go:580]     Audit-Id: f88c06b4-9ebf-4432-a8b0-c23b56e4663e
	I0116 02:34:13.538693  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:13.538954  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 02:34:13.539506  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:13.539529  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:13.539540  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:13.539549  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:13.542921  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:13.542955  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:13.542966  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:13 GMT
	I0116 02:34:13.542974  994955 round_trippers.go:580]     Audit-Id: 34cbfee9-98ec-420e-8784-75edc2804c99
	I0116 02:34:13.542981  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:13.542994  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:13.543002  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:13.543019  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:13.543690  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:13.544157  994955 pod_ready.go:102] pod "coredns-5dd5756b68-965sn" in "kube-system" namespace has status "Ready":"False"
	I0116 02:34:14.036389  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:14.036428  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:14.036437  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:14.036443  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:14.040004  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:14.040035  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:14.040042  994955 round_trippers.go:580]     Audit-Id: 7e8f9409-686d-46f4-bea2-2a560e7a856b
	I0116 02:34:14.040048  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:14.040053  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:14.040059  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:14.040064  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:14.040073  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:14 GMT
	I0116 02:34:14.040267  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"806","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 02:34:14.040809  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:14.040828  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:14.040836  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:14.040846  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:14.042884  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:14.042905  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:14.042914  994955 round_trippers.go:580]     Audit-Id: f0a31a3b-bff1-438f-b4b7-08da512e9360
	I0116 02:34:14.042922  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:14.042930  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:14.042936  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:14.042944  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:14.042951  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:14 GMT
	I0116 02:34:14.043107  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:14.535963  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:14.535990  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:14.535999  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:14.536005  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:14.539137  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:14.539168  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:14.539178  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:14 GMT
	I0116 02:34:14.539187  994955 round_trippers.go:580]     Audit-Id: 011d951f-a1bd-4734-b829-97dd649b4317
	I0116 02:34:14.539203  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:14.539211  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:14.539219  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:14.539227  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:14.539415  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"914","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6493 chars]
	I0116 02:34:14.539970  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:14.539988  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:14.539996  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:14.540002  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:14.542141  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:14.542164  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:14.542174  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:14.542182  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:14 GMT
	I0116 02:34:14.542190  994955 round_trippers.go:580]     Audit-Id: bc0e0e9c-5896-48e0-865a-3b483208aeb5
	I0116 02:34:14.542199  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:14.542213  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:14.542225  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:14.542474  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:15.036261  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:15.036297  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:15.036306  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:15.036313  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:15.039387  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:15.039414  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:15.039425  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:15.039433  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:15.039440  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:15.039448  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:15.039457  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:15 GMT
	I0116 02:34:15.039465  994955 round_trippers.go:580]     Audit-Id: 5e8dcb19-7633-4b10-8ec7-22197ff7bf1f
	I0116 02:34:15.039603  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"914","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6493 chars]
	I0116 02:34:15.040105  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:15.040124  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:15.040134  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:15.040143  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:15.043859  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:15.043882  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:15.043892  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:15.043910  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:15.043921  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:15 GMT
	I0116 02:34:15.043932  994955 round_trippers.go:580]     Audit-Id: c27ad175-c2c7-486b-8860-b0f4db69d278
	I0116 02:34:15.043941  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:15.043951  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:15.044237  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:15.535586  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:34:15.535616  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:15.535624  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:15.535630  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:15.538930  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:15.538963  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:15.538974  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:15.538982  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:15.538988  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:15.538998  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:15 GMT
	I0116 02:34:15.539009  994955 round_trippers.go:580]     Audit-Id: 7c0f6c44-f639-40a1-88eb-9f751bff922d
	I0116 02:34:15.539019  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:15.539447  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"922","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0116 02:34:15.540087  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:15.540107  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:15.540118  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:15.540127  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:15.542731  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:15.542755  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:15.542766  994955 round_trippers.go:580]     Audit-Id: 76dd3d25-406b-41df-9a9a-4b3c0a22e6de
	I0116 02:34:15.542778  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:15.542791  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:15.542801  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:15.542809  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:15.542816  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:15 GMT
	I0116 02:34:15.543055  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:15.543381  994955 pod_ready.go:92] pod "coredns-5dd5756b68-965sn" in "kube-system" namespace has status "Ready":"True"
	I0116 02:34:15.543401  994955 pod_ready.go:81] duration metric: took 8.508125556s waiting for pod "coredns-5dd5756b68-965sn" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:15.543416  994955 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:15.543482  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-835787
	I0116 02:34:15.543492  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:15.543502  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:15.543513  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:15.548250  994955 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:34:15.548279  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:15.548289  994955 round_trippers.go:580]     Audit-Id: d2f56fa7-9c0c-40b7-8a0d-6e58f5d8b352
	I0116 02:34:15.548298  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:15.548306  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:15.548316  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:15.548325  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:15.548333  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:15 GMT
	I0116 02:34:15.549046  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-835787","namespace":"kube-system","uid":"ccb51de1-d565-42b0-bd30-9b1acb1c725d","resourceVersion":"879","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.50:2379","kubernetes.io/config.hash":"108085f55363e386b9f9c083ac579444","kubernetes.io/config.mirror":"108085f55363e386b9f9c083ac579444","kubernetes.io/config.seen":"2024-01-16T02:23:33.032941198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0116 02:34:15.549541  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:15.549556  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:15.549563  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:15.549572  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:15.551938  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:15.551962  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:15.551971  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:15.551979  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:15.551988  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:15.552001  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:15 GMT
	I0116 02:34:15.552011  994955 round_trippers.go:580]     Audit-Id: 82abe5a5-0cc1-4552-ae8c-cac9d4a67121
	I0116 02:34:15.552021  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:15.552174  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:15.552632  994955 pod_ready.go:92] pod "etcd-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:34:15.552658  994955 pod_ready.go:81] duration metric: took 9.232174ms waiting for pod "etcd-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:15.552682  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:15.552765  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-835787
	I0116 02:34:15.552774  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:15.552781  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:15.552787  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:15.556649  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:15.556669  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:15.556676  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:15.556681  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:15.556686  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:15.556692  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:15.556700  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:15 GMT
	I0116 02:34:15.556706  994955 round_trippers.go:580]     Audit-Id: 09ed4036-0e89-47d5-a704-a0b5db3af131
	I0116 02:34:15.556860  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-835787","namespace":"kube-system","uid":"9c26db11-7208-4540-8a73-407a6edd3a0b","resourceVersion":"893","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.50:8443","kubernetes.io/config.hash":"b27880b6b81ca11dc023b4901941ff6f","kubernetes.io/config.mirror":"b27880b6b81ca11dc023b4901941ff6f","kubernetes.io/config.seen":"2024-01-16T02:23:33.032945135Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0116 02:34:15.557297  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:15.557310  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:15.557317  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:15.557323  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:15.559360  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:15.559381  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:15.559390  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:15 GMT
	I0116 02:34:15.559399  994955 round_trippers.go:580]     Audit-Id: ac8bed37-3eb3-4e75-9d70-0f8c674545c7
	I0116 02:34:15.559406  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:15.559413  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:15.559424  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:15.559433  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:15.559812  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:15.560113  994955 pod_ready.go:92] pod "kube-apiserver-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:34:15.560127  994955 pod_ready.go:81] duration metric: took 7.434351ms waiting for pod "kube-apiserver-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:15.560137  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:15.560186  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-835787
	I0116 02:34:15.560193  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:15.560201  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:15.560207  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:15.562216  994955 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:34:15.562234  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:15.562242  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:15.562251  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:15 GMT
	I0116 02:34:15.562259  994955 round_trippers.go:580]     Audit-Id: c28c700f-2f77-41e9-a82a-9e1e3ba8b82c
	I0116 02:34:15.562269  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:15.562282  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:15.562294  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:15.562636  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-835787","namespace":"kube-system","uid":"daf9e312-54ad-4a4e-b334-9b84e55f8fef","resourceVersion":"885","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6adb137abb6e7ac4dcf8e50e41a3773b","kubernetes.io/config.mirror":"6adb137abb6e7ac4dcf8e50e41a3773b","kubernetes.io/config.seen":"2024-01-16T02:23:33.032946146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0116 02:34:15.563038  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:15.563053  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:15.563065  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:15.563074  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:15.564911  994955 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:34:15.564924  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:15.564930  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:15.564935  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:15.564941  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:15.564949  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:15 GMT
	I0116 02:34:15.564954  994955 round_trippers.go:580]     Audit-Id: c73e160c-1535-4c03-9c4b-add1d29dc133
	I0116 02:34:15.564960  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:15.565106  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:15.565436  994955 pod_ready.go:92] pod "kube-controller-manager-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:34:15.565456  994955 pod_ready.go:81] duration metric: took 5.312623ms waiting for pod "kube-controller-manager-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:15.565467  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpdqr" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:15.565524  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpdqr
	I0116 02:34:15.565531  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:15.565538  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:15.565545  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:15.568521  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:15.568567  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:15.568579  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:15.568589  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:15.568600  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:15 GMT
	I0116 02:34:15.568609  994955 round_trippers.go:580]     Audit-Id: 79cb3533-8ca6-42e5-b704-76eb10e199ba
	I0116 02:34:15.568617  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:15.568629  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:15.568756  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fpdqr","generateName":"kube-proxy-","namespace":"kube-system","uid":"42b74cbd-93d8-4ac7-9071-112d5e7c572b","resourceVersion":"733","creationTimestamp":"2024-01-16T02:25:22Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0116 02:34:15.569289  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m03
	I0116 02:34:15.569308  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:15.569319  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:15.569328  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:15.571321  994955 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:34:15.571339  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:15.571345  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:15.571351  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:15 GMT
	I0116 02:34:15.571356  994955 round_trippers.go:580]     Audit-Id: 62c057e0-266d-4548-9f18-8c39a5227278
	I0116 02:34:15.571365  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:15.571372  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:15.571384  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:15.571489  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m03","uid":"67df5a31-bd76-4643-b628-d7570878cf19","resourceVersion":"899","creationTimestamp":"2024-01-16T02:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_26_05_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0116 02:34:15.571746  994955 pod_ready.go:92] pod "kube-proxy-fpdqr" in "kube-system" namespace has status "Ready":"True"
	I0116 02:34:15.571774  994955 pod_ready.go:81] duration metric: took 6.297114ms waiting for pod "kube-proxy-fpdqr" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:15.571787  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gbvc2" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:15.736197  994955 request.go:629] Waited for 164.333324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbvc2
	I0116 02:34:15.736284  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbvc2
	I0116 02:34:15.736296  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:15.736310  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:15.736336  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:15.739291  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:15.739316  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:15.739324  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:15 GMT
	I0116 02:34:15.739330  994955 round_trippers.go:580]     Audit-Id: 0338939c-b14f-4bbe-978b-1a262dc2e13b
	I0116 02:34:15.739335  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:15.739340  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:15.739347  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:15.739353  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:15.739499  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gbvc2","generateName":"kube-proxy-","namespace":"kube-system","uid":"74d63696-cb46-484d-937b-8883e6f1df06","resourceVersion":"824","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0116 02:34:15.936344  994955 request.go:629] Waited for 196.383652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:15.936440  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:15.936449  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:15.936460  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:15.936474  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:15.939823  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:15.939847  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:15.939856  994955 round_trippers.go:580]     Audit-Id: 2d2ff1b3-1c53-401c-8c2c-18fbbf6e9a60
	I0116 02:34:15.939861  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:15.939867  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:15.939872  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:15.939878  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:15.939883  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:15 GMT
	I0116 02:34:15.940065  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:15.940400  994955 pod_ready.go:92] pod "kube-proxy-gbvc2" in "kube-system" namespace has status "Ready":"True"
	I0116 02:34:15.940420  994955 pod_ready.go:81] duration metric: took 368.626005ms waiting for pod "kube-proxy-gbvc2" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:15.940433  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hxx8p" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:16.136372  994955 request.go:629] Waited for 195.853852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxx8p
	I0116 02:34:16.136455  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxx8p
	I0116 02:34:16.136462  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:16.136472  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:16.136482  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:16.141850  994955 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 02:34:16.141874  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:16.141884  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:16.141891  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:16 GMT
	I0116 02:34:16.141899  994955 round_trippers.go:580]     Audit-Id: 9a253b21-b59a-4df6-87dc-ea0c5ac52580
	I0116 02:34:16.141906  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:16.141913  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:16.141923  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:16.142414  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hxx8p","generateName":"kube-proxy-","namespace":"kube-system","uid":"9c35aa68-14ac-41e1-81f8-8fdb0c48d9f1","resourceVersion":"525","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0116 02:34:16.336270  994955 request.go:629] Waited for 193.365617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:34:16.336347  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:34:16.336352  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:16.336360  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:16.336366  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:16.339486  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:16.339506  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:16.339514  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:16 GMT
	I0116 02:34:16.339521  994955 round_trippers.go:580]     Audit-Id: caf7fcb1-04be-4b27-91c5-1f1682250e22
	I0116 02:34:16.339529  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:16.339537  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:16.339546  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:16.339555  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:16.339878  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f","resourceVersion":"873","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_26_05_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0116 02:34:16.340184  994955 pod_ready.go:92] pod "kube-proxy-hxx8p" in "kube-system" namespace has status "Ready":"True"
	I0116 02:34:16.340203  994955 pod_ready.go:81] duration metric: took 399.761795ms waiting for pod "kube-proxy-hxx8p" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:16.340216  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:16.536459  994955 request.go:629] Waited for 196.137032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-835787
	I0116 02:34:16.536543  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-835787
	I0116 02:34:16.536551  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:16.536563  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:16.536579  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:16.539406  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:16.539436  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:16.539449  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:16.539457  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:16.539464  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:16.539471  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:16.539478  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:16 GMT
	I0116 02:34:16.539484  994955 round_trippers.go:580]     Audit-Id: 581a7fea-7dd2-4757-8832-2c9b5f9849b2
	I0116 02:34:16.539609  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-835787","namespace":"kube-system","uid":"7b9c28cc-6e78-413a-af72-511714d2462e","resourceVersion":"908","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"230f2dad53142209ac2ae48ed27aa7b4","kubernetes.io/config.mirror":"230f2dad53142209ac2ae48ed27aa7b4","kubernetes.io/config.seen":"2024-01-16T02:23:33.032947019Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0116 02:34:16.736432  994955 request.go:629] Waited for 196.417867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:16.736525  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:34:16.736533  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:16.736549  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:16.736564  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:16.739441  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:34:16.739464  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:16.739471  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:16.739477  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:16.739483  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:16.739492  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:16 GMT
	I0116 02:34:16.739501  994955 round_trippers.go:580]     Audit-Id: 3b978336-af06-420a-8245-73c0503d21c0
	I0116 02:34:16.739509  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:16.739908  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 02:34:16.740230  994955 pod_ready.go:92] pod "kube-scheduler-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:34:16.740250  994955 pod_ready.go:81] duration metric: took 400.022098ms waiting for pod "kube-scheduler-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:34:16.740261  994955 pod_ready.go:38] duration metric: took 9.715678113s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:34:16.740284  994955 api_server.go:52] waiting for apiserver process to appear ...
	I0116 02:34:16.740349  994955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:34:16.756920  994955 command_runner.go:130] > 1089
	I0116 02:34:16.757145  994955 api_server.go:72] duration metric: took 15.339054247s to wait for apiserver process to appear ...
	I0116 02:34:16.757166  994955 api_server.go:88] waiting for apiserver healthz status ...
	I0116 02:34:16.757189  994955 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 02:34:16.763166  994955 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I0116 02:34:16.763262  994955 round_trippers.go:463] GET https://192.168.39.50:8443/version
	I0116 02:34:16.763275  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:16.763287  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:16.763299  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:16.764241  994955 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0116 02:34:16.764260  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:16.764269  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:16.764278  994955 round_trippers.go:580]     Content-Length: 264
	I0116 02:34:16.764286  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:16 GMT
	I0116 02:34:16.764295  994955 round_trippers.go:580]     Audit-Id: 43928c1b-0584-496e-9976-975ee96e4792
	I0116 02:34:16.764305  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:16.764319  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:16.764328  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:16.764462  994955 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0116 02:34:16.764533  994955 api_server.go:141] control plane version: v1.28.4
	I0116 02:34:16.764552  994955 api_server.go:131] duration metric: took 7.379153ms to wait for apiserver health ...
	I0116 02:34:16.764565  994955 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 02:34:16.936067  994955 request.go:629] Waited for 171.412631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 02:34:16.936147  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 02:34:16.936156  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:16.936165  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:16.936177  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:16.944120  994955 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 02:34:16.944150  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:16.944161  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:16.944170  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:16 GMT
	I0116 02:34:16.944177  994955 round_trippers.go:580]     Audit-Id: 8298f878-ac6b-499a-8268-c58cb87a4206
	I0116 02:34:16.944185  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:16.944197  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:16.944210  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:16.945489  994955 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"927"},"items":[{"metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"922","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81838 chars]
	I0116 02:34:16.948853  994955 system_pods.go:59] 12 kube-system pods found
	I0116 02:34:16.948885  994955 system_pods.go:61] "coredns-5dd5756b68-965sn" [a0898f09-1a64-4beb-bfbf-de15f2e07038] Running
	I0116 02:34:16.948891  994955 system_pods.go:61] "etcd-multinode-835787" [ccb51de1-d565-42b0-bd30-9b1acb1c725d] Running
	I0116 02:34:16.948896  994955 system_pods.go:61] "kindnet-755b9" [ee1ea8c4-abfe-4fea-9f71-32840f6790ed] Running
	I0116 02:34:16.948904  994955 system_pods.go:61] "kindnet-hrsvh" [7ff7f33b-72a7-47b1-b4a9-bbbdad91e0d9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 02:34:16.948912  994955 system_pods.go:61] "kindnet-nllfm" [faff798d-63d5-440d-a8f5-1f8d52ab7282] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 02:34:16.948917  994955 system_pods.go:61] "kube-apiserver-multinode-835787" [9c26db11-7208-4540-8a73-407a6edd3a0b] Running
	I0116 02:34:16.948924  994955 system_pods.go:61] "kube-controller-manager-multinode-835787" [daf9e312-54ad-4a4e-b334-9b84e55f8fef] Running
	I0116 02:34:16.948928  994955 system_pods.go:61] "kube-proxy-fpdqr" [42b74cbd-93d8-4ac7-9071-112d5e7c572b] Running
	I0116 02:34:16.948933  994955 system_pods.go:61] "kube-proxy-gbvc2" [74d63696-cb46-484d-937b-8883e6f1df06] Running
	I0116 02:34:16.948937  994955 system_pods.go:61] "kube-proxy-hxx8p" [9c35aa68-14ac-41e1-81f8-8fdb0c48d9f1] Running
	I0116 02:34:16.948942  994955 system_pods.go:61] "kube-scheduler-multinode-835787" [7b9c28cc-6e78-413a-af72-511714d2462e] Running
	I0116 02:34:16.948947  994955 system_pods.go:61] "storage-provisioner" [2d18fde8-ca44-4257-8475-100cd8b34ef8] Running
	I0116 02:34:16.948955  994955 system_pods.go:74] duration metric: took 184.380977ms to wait for pod list to return data ...
	I0116 02:34:16.948967  994955 default_sa.go:34] waiting for default service account to be created ...
	I0116 02:34:17.136428  994955 request.go:629] Waited for 187.361416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/default/serviceaccounts
	I0116 02:34:17.136510  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/default/serviceaccounts
	I0116 02:34:17.136518  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:17.136530  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:17.136548  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:17.139777  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:34:17.139805  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:17.139815  994955 round_trippers.go:580]     Audit-Id: 44952b9a-45c9-429b-8c72-8a28930d7256
	I0116 02:34:17.139823  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:17.139831  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:17.139845  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:17.139857  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:17.139865  994955 round_trippers.go:580]     Content-Length: 261
	I0116 02:34:17.139882  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:17 GMT
	I0116 02:34:17.139918  994955 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"927"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"4cc2ba47-febe-498a-9316-a228f833a1cc","resourceVersion":"346","creationTimestamp":"2024-01-16T02:23:45Z"}}]}
	I0116 02:34:17.140132  994955 default_sa.go:45] found service account: "default"
	I0116 02:34:17.140157  994955 default_sa.go:55] duration metric: took 191.182281ms for default service account to be created ...
	I0116 02:34:17.140171  994955 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 02:34:17.336579  994955 request.go:629] Waited for 196.338601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 02:34:17.336648  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 02:34:17.336672  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:17.336689  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:17.336700  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:17.341019  994955 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:34:17.341047  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:17.341055  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:17.341061  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:17.341066  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:17.341072  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:17 GMT
	I0116 02:34:17.341077  994955 round_trippers.go:580]     Audit-Id: ae049778-7ddf-4ecf-9118-ee46732b51e2
	I0116 02:34:17.341083  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:17.342654  994955 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"929"},"items":[{"metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"922","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81838 chars]
	I0116 02:34:17.345104  994955 system_pods.go:86] 12 kube-system pods found
	I0116 02:34:17.345133  994955 system_pods.go:89] "coredns-5dd5756b68-965sn" [a0898f09-1a64-4beb-bfbf-de15f2e07038] Running
	I0116 02:34:17.345141  994955 system_pods.go:89] "etcd-multinode-835787" [ccb51de1-d565-42b0-bd30-9b1acb1c725d] Running
	I0116 02:34:17.345149  994955 system_pods.go:89] "kindnet-755b9" [ee1ea8c4-abfe-4fea-9f71-32840f6790ed] Running
	I0116 02:34:17.345159  994955 system_pods.go:89] "kindnet-hrsvh" [7ff7f33b-72a7-47b1-b4a9-bbbdad91e0d9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 02:34:17.345169  994955 system_pods.go:89] "kindnet-nllfm" [faff798d-63d5-440d-a8f5-1f8d52ab7282] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 02:34:17.345178  994955 system_pods.go:89] "kube-apiserver-multinode-835787" [9c26db11-7208-4540-8a73-407a6edd3a0b] Running
	I0116 02:34:17.345190  994955 system_pods.go:89] "kube-controller-manager-multinode-835787" [daf9e312-54ad-4a4e-b334-9b84e55f8fef] Running
	I0116 02:34:17.345199  994955 system_pods.go:89] "kube-proxy-fpdqr" [42b74cbd-93d8-4ac7-9071-112d5e7c572b] Running
	I0116 02:34:17.345209  994955 system_pods.go:89] "kube-proxy-gbvc2" [74d63696-cb46-484d-937b-8883e6f1df06] Running
	I0116 02:34:17.345217  994955 system_pods.go:89] "kube-proxy-hxx8p" [9c35aa68-14ac-41e1-81f8-8fdb0c48d9f1] Running
	I0116 02:34:17.345227  994955 system_pods.go:89] "kube-scheduler-multinode-835787" [7b9c28cc-6e78-413a-af72-511714d2462e] Running
	I0116 02:34:17.345237  994955 system_pods.go:89] "storage-provisioner" [2d18fde8-ca44-4257-8475-100cd8b34ef8] Running
	I0116 02:34:17.345248  994955 system_pods.go:126] duration metric: took 205.06709ms to wait for k8s-apps to be running ...
	I0116 02:34:17.345261  994955 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:34:17.345320  994955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:34:17.360360  994955 system_svc.go:56] duration metric: took 15.091873ms WaitForService to wait for kubelet.
	I0116 02:34:17.360395  994955 kubeadm.go:581] duration metric: took 15.942307663s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:34:17.360421  994955 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:34:17.535800  994955 request.go:629] Waited for 175.29733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes
	I0116 02:34:17.535880  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes
	I0116 02:34:17.535884  994955 round_trippers.go:469] Request Headers:
	I0116 02:34:17.535893  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:34:17.535899  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:34:17.541574  994955 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 02:34:17.541605  994955 round_trippers.go:577] Response Headers:
	I0116 02:34:17.541615  994955 round_trippers.go:580]     Audit-Id: 91366efa-c6c3-4c41-aee6-b023df70fda7
	I0116 02:34:17.541629  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:34:17.541634  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:34:17.541640  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:34:17.541645  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:34:17.541650  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:34:17 GMT
	I0116 02:34:17.542385  994955 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"930"},"items":[{"metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"886","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16178 chars]
	I0116 02:34:17.543038  994955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:34:17.543062  994955 node_conditions.go:123] node cpu capacity is 2
	I0116 02:34:17.543074  994955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:34:17.543079  994955 node_conditions.go:123] node cpu capacity is 2
	I0116 02:34:17.543084  994955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:34:17.543090  994955 node_conditions.go:123] node cpu capacity is 2
	I0116 02:34:17.543098  994955 node_conditions.go:105] duration metric: took 182.670764ms to run NodePressure ...
	I0116 02:34:17.543117  994955 start.go:228] waiting for startup goroutines ...
	I0116 02:34:17.543130  994955 start.go:233] waiting for cluster config update ...
	I0116 02:34:17.543136  994955 start.go:242] writing updated cluster config ...
	I0116 02:34:17.543606  994955 config.go:182] Loaded profile config "multinode-835787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:34:17.543709  994955 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/config.json ...
	I0116 02:34:17.547400  994955 out.go:177] * Starting worker node multinode-835787-m02 in cluster multinode-835787
	I0116 02:34:17.548805  994955 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:34:17.548834  994955 cache.go:56] Caching tarball of preloaded images
	I0116 02:34:17.548956  994955 preload.go:174] Found /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 02:34:17.548974  994955 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 02:34:17.549120  994955 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/config.json ...
	I0116 02:34:17.549336  994955 start.go:365] acquiring machines lock for multinode-835787-m02: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 02:34:17.549398  994955 start.go:369] acquired machines lock for "multinode-835787-m02" in 38.803µs
	I0116 02:34:17.549419  994955 start.go:96] Skipping create...Using existing machine configuration
	I0116 02:34:17.549429  994955 fix.go:54] fixHost starting: m02
	I0116 02:34:17.549787  994955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:34:17.549850  994955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:34:17.565010  994955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35057
	I0116 02:34:17.565536  994955 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:34:17.566086  994955 main.go:141] libmachine: Using API Version  1
	I0116 02:34:17.566112  994955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:34:17.566464  994955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:34:17.566707  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .DriverName
	I0116 02:34:17.566865  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetState
	I0116 02:34:17.568427  994955 fix.go:102] recreateIfNeeded on multinode-835787-m02: state=Running err=<nil>
	W0116 02:34:17.568444  994955 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 02:34:17.570783  994955 out.go:177] * Updating the running kvm2 "multinode-835787-m02" VM ...
	I0116 02:34:17.572363  994955 machine.go:88] provisioning docker machine ...
	I0116 02:34:17.572392  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .DriverName
	I0116 02:34:17.572663  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetMachineName
	I0116 02:34:17.572848  994955 buildroot.go:166] provisioning hostname "multinode-835787-m02"
	I0116 02:34:17.572868  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetMachineName
	I0116 02:34:17.573013  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:34:17.575681  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:34:17.576207  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:34:17.576242  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:34:17.576386  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:34:17.576576  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:34:17.576748  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:34:17.576887  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:34:17.577062  994955 main.go:141] libmachine: Using SSH client type: native
	I0116 02:34:17.577380  994955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0116 02:34:17.577393  994955 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-835787-m02 && echo "multinode-835787-m02" | sudo tee /etc/hostname
	I0116 02:34:17.728967  994955 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-835787-m02
	
	I0116 02:34:17.728996  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:34:17.731885  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:34:17.732312  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:34:17.732350  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:34:17.732508  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:34:17.732754  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:34:17.732971  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:34:17.733148  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:34:17.733358  994955 main.go:141] libmachine: Using SSH client type: native
	I0116 02:34:17.733735  994955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0116 02:34:17.733761  994955 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-835787-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-835787-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-835787-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:34:17.867079  994955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:34:17.867117  994955 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 02:34:17.867136  994955 buildroot.go:174] setting up certificates
	I0116 02:34:17.867144  994955 provision.go:83] configureAuth start
	I0116 02:34:17.867154  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetMachineName
	I0116 02:34:17.867495  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetIP
	I0116 02:34:17.870171  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:34:17.870622  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:34:17.870663  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:34:17.870789  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:34:17.873431  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:34:17.873868  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:34:17.873899  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:34:17.874082  994955 provision.go:138] copyHostCerts
	I0116 02:34:17.874129  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 02:34:17.874180  994955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 02:34:17.874193  994955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 02:34:17.874277  994955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 02:34:17.874374  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 02:34:17.874399  994955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 02:34:17.874406  994955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 02:34:17.874448  994955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 02:34:17.874513  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 02:34:17.874536  994955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 02:34:17.874545  994955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 02:34:17.874578  994955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 02:34:17.874648  994955 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.multinode-835787-m02 san=[192.168.39.15 192.168.39.15 localhost 127.0.0.1 minikube multinode-835787-m02]
	I0116 02:34:17.967628  994955 provision.go:172] copyRemoteCerts
	I0116 02:34:17.967709  994955 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:34:17.967747  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:34:17.970834  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:34:17.971222  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:34:17.971254  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:34:17.971507  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:34:17.971711  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:34:17.971876  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:34:17.972048  994955 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02/id_rsa Username:docker}
	I0116 02:34:18.067831  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 02:34:18.067910  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 02:34:18.090364  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 02:34:18.090435  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0116 02:34:18.113097  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 02:34:18.113183  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 02:34:18.135551  994955 provision.go:86] duration metric: configureAuth took 268.394364ms
	I0116 02:34:18.135583  994955 buildroot.go:189] setting minikube options for container-runtime
	I0116 02:34:18.135945  994955 config.go:182] Loaded profile config "multinode-835787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:34:18.136049  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:34:18.139187  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:34:18.139631  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:34:18.139664  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:34:18.139875  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:34:18.140117  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:34:18.140302  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:34:18.140505  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:34:18.140725  994955 main.go:141] libmachine: Using SSH client type: native
	I0116 02:34:18.141091  994955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0116 02:34:18.141109  994955 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 02:35:48.720268  994955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 02:35:48.720304  994955 machine.go:91] provisioned docker machine in 1m31.147920261s
	I0116 02:35:48.720318  994955 start.go:300] post-start starting for "multinode-835787-m02" (driver="kvm2")
	I0116 02:35:48.720333  994955 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:35:48.720364  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .DriverName
	I0116 02:35:48.720813  994955 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:35:48.720855  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:35:48.724486  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:35:48.724997  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:35:48.725020  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:35:48.725271  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:35:48.725530  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:35:48.725717  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:35:48.725898  994955 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02/id_rsa Username:docker}
	I0116 02:35:48.827798  994955 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:35:48.832469  994955 command_runner.go:130] > NAME=Buildroot
	I0116 02:35:48.832494  994955 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 02:35:48.832499  994955 command_runner.go:130] > ID=buildroot
	I0116 02:35:48.832506  994955 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 02:35:48.832514  994955 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 02:35:48.832713  994955 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 02:35:48.832736  994955 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 02:35:48.832802  994955 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 02:35:48.832895  994955 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 02:35:48.832942  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> /etc/ssl/certs/9784822.pem
	I0116 02:35:48.833105  994955 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 02:35:48.842645  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 02:35:48.867448  994955 start.go:303] post-start completed in 147.109232ms
	I0116 02:35:48.867489  994955 fix.go:56] fixHost completed within 1m31.318058508s
	I0116 02:35:48.867519  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:35:48.870239  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:35:48.870619  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:35:48.870652  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:35:48.870793  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:35:48.871013  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:35:48.871234  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:35:48.871378  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:35:48.871544  994955 main.go:141] libmachine: Using SSH client type: native
	I0116 02:35:48.871880  994955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0116 02:35:48.871894  994955 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 02:35:49.006835  994955 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705372549.000465083
	
	I0116 02:35:49.006863  994955 fix.go:206] guest clock: 1705372549.000465083
	I0116 02:35:49.006872  994955 fix.go:219] Guest: 2024-01-16 02:35:49.000465083 +0000 UTC Remote: 2024-01-16 02:35:48.867494524 +0000 UTC m=+454.723840270 (delta=132.970559ms)
	I0116 02:35:49.006888  994955 fix.go:190] guest clock delta is within tolerance: 132.970559ms
	I0116 02:35:49.006893  994955 start.go:83] releasing machines lock for "multinode-835787-m02", held for 1m31.457482228s
	I0116 02:35:49.006912  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .DriverName
	I0116 02:35:49.007193  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetIP
	I0116 02:35:49.009562  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:35:49.009966  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:35:49.010002  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:35:49.012081  994955 out.go:177] * Found network options:
	I0116 02:35:49.013787  994955 out.go:177]   - NO_PROXY=192.168.39.50
	W0116 02:35:49.016063  994955 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 02:35:49.016100  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .DriverName
	I0116 02:35:49.016683  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .DriverName
	I0116 02:35:49.016850  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .DriverName
	I0116 02:35:49.016939  994955 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:35:49.016978  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	W0116 02:35:49.017073  994955 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 02:35:49.017174  994955 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 02:35:49.017195  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:35:49.019850  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:35:49.019874  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:35:49.020370  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:35:49.020406  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:35:49.020462  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:35:49.020497  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:35:49.020610  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:35:49.020707  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:35:49.020768  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:35:49.020871  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:35:49.020951  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:35:49.021015  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:35:49.021078  994955 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02/id_rsa Username:docker}
	I0116 02:35:49.021124  994955 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02/id_rsa Username:docker}
	I0116 02:35:49.258625  994955 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 02:35:49.258625  994955 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 02:35:49.265744  994955 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0116 02:35:49.265831  994955 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 02:35:49.265897  994955 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:35:49.275183  994955 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0116 02:35:49.275224  994955 start.go:475] detecting cgroup driver to use...
	I0116 02:35:49.275316  994955 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:35:49.291021  994955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:35:49.304884  994955 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:35:49.304956  994955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:35:49.318966  994955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:35:49.332048  994955 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:35:49.480950  994955 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:35:49.617318  994955 docker.go:233] disabling docker service ...
	I0116 02:35:49.617389  994955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:35:49.632024  994955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:35:49.644871  994955 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:35:49.773839  994955 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:35:49.906421  994955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 02:35:49.919199  994955 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:35:49.937417  994955 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0116 02:35:49.937789  994955 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 02:35:49.937867  994955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:35:49.947745  994955 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 02:35:49.947824  994955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:35:49.957412  994955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:35:49.966818  994955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:35:49.976473  994955 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:35:49.986185  994955 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:35:49.994406  994955 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0116 02:35:49.994602  994955 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:35:50.003990  994955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:35:50.133128  994955 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 02:35:51.979901  994955 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.846722428s)
	I0116 02:35:51.979949  994955 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 02:35:51.980018  994955 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 02:35:51.985299  994955 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 02:35:51.985322  994955 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 02:35:51.985329  994955 command_runner.go:130] > Device: 16h/22d	Inode: 1217        Links: 1
	I0116 02:35:51.985336  994955 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:35:51.985341  994955 command_runner.go:130] > Access: 2024-01-16 02:35:51.906427422 +0000
	I0116 02:35:51.985347  994955 command_runner.go:130] > Modify: 2024-01-16 02:35:51.906427422 +0000
	I0116 02:35:51.985352  994955 command_runner.go:130] > Change: 2024-01-16 02:35:51.907427487 +0000
	I0116 02:35:51.985356  994955 command_runner.go:130] >  Birth: -
	I0116 02:35:51.985686  994955 start.go:543] Will wait 60s for crictl version
	I0116 02:35:51.985751  994955 ssh_runner.go:195] Run: which crictl
	I0116 02:35:51.989547  994955 command_runner.go:130] > /usr/bin/crictl
	I0116 02:35:51.989677  994955 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:35:52.031058  994955 command_runner.go:130] > Version:  0.1.0
	I0116 02:35:52.031090  994955 command_runner.go:130] > RuntimeName:  cri-o
	I0116 02:35:52.031098  994955 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0116 02:35:52.031106  994955 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 02:35:52.031132  994955 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 02:35:52.031209  994955 ssh_runner.go:195] Run: crio --version
	I0116 02:35:52.084696  994955 command_runner.go:130] > crio version 1.24.1
	I0116 02:35:52.084731  994955 command_runner.go:130] > Version:          1.24.1
	I0116 02:35:52.084747  994955 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 02:35:52.084755  994955 command_runner.go:130] > GitTreeState:     dirty
	I0116 02:35:52.084768  994955 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 02:35:52.084776  994955 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 02:35:52.084782  994955 command_runner.go:130] > Compiler:         gc
	I0116 02:35:52.084790  994955 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:35:52.084799  994955 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:35:52.084811  994955 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:35:52.084823  994955 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:35:52.084838  994955 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:35:52.086352  994955 ssh_runner.go:195] Run: crio --version
	I0116 02:35:52.139215  994955 command_runner.go:130] > crio version 1.24.1
	I0116 02:35:52.139241  994955 command_runner.go:130] > Version:          1.24.1
	I0116 02:35:52.139253  994955 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 02:35:52.139260  994955 command_runner.go:130] > GitTreeState:     dirty
	I0116 02:35:52.139269  994955 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 02:35:52.139277  994955 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 02:35:52.139288  994955 command_runner.go:130] > Compiler:         gc
	I0116 02:35:52.139314  994955 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:35:52.139333  994955 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:35:52.139350  994955 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:35:52.139357  994955 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:35:52.139364  994955 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:35:52.142722  994955 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 02:35:52.144526  994955 out.go:177]   - env NO_PROXY=192.168.39.50
	I0116 02:35:52.145952  994955 main.go:141] libmachine: (multinode-835787-m02) Calling .GetIP
	I0116 02:35:52.148905  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:35:52.149303  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:35:52.149343  994955 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:35:52.149587  994955 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 02:35:52.154551  994955 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0116 02:35:52.154627  994955 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787 for IP: 192.168.39.15
	I0116 02:35:52.154661  994955 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:35:52.154865  994955 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 02:35:52.154926  994955 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 02:35:52.154946  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 02:35:52.154971  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 02:35:52.154996  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 02:35:52.155015  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 02:35:52.155090  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 02:35:52.155138  994955 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 02:35:52.155157  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 02:35:52.155197  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 02:35:52.155234  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:35:52.155273  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 02:35:52.155335  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 02:35:52.155379  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem -> /usr/share/ca-certificates/978482.pem
	I0116 02:35:52.155403  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> /usr/share/ca-certificates/9784822.pem
	I0116 02:35:52.155421  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:35:52.155967  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:35:52.181407  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 02:35:52.206330  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:35:52.231171  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 02:35:52.255881  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 02:35:52.280979  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 02:35:52.306578  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:35:52.330761  994955 ssh_runner.go:195] Run: openssl version
	I0116 02:35:52.337099  994955 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 02:35:52.337200  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 02:35:52.347929  994955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 02:35:52.352878  994955 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 02:35:52.352913  994955 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 02:35:52.352970  994955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 02:35:52.358626  994955 command_runner.go:130] > 3ec20f2e
	I0116 02:35:52.358923  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 02:35:52.367793  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:35:52.378285  994955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:35:52.383272  994955 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:35:52.383308  994955 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:35:52.383352  994955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:35:52.388898  994955 command_runner.go:130] > b5213941
	I0116 02:35:52.389015  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 02:35:52.397692  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 02:35:52.408193  994955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 02:35:52.412930  994955 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 02:35:52.413198  994955 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 02:35:52.413267  994955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 02:35:52.419050  994955 command_runner.go:130] > 51391683
	I0116 02:35:52.419127  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 02:35:52.428297  994955 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:35:52.432619  994955 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:35:52.432668  994955 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:35:52.432765  994955 ssh_runner.go:195] Run: crio config
	I0116 02:35:52.489907  994955 command_runner.go:130] ! time="2024-01-16 02:35:52.483628518Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0116 02:35:52.489948  994955 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0116 02:35:52.495850  994955 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 02:35:52.495876  994955 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 02:35:52.495884  994955 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 02:35:52.495888  994955 command_runner.go:130] > #
	I0116 02:35:52.495895  994955 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 02:35:52.495901  994955 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 02:35:52.495907  994955 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 02:35:52.495914  994955 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 02:35:52.495917  994955 command_runner.go:130] > # reload'.
	I0116 02:35:52.495926  994955 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 02:35:52.495935  994955 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 02:35:52.495945  994955 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 02:35:52.495954  994955 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 02:35:52.495964  994955 command_runner.go:130] > [crio]
	I0116 02:35:52.495974  994955 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 02:35:52.495986  994955 command_runner.go:130] > # container images, in this directory.
	I0116 02:35:52.495998  994955 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0116 02:35:52.496013  994955 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 02:35:52.496024  994955 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0116 02:35:52.496032  994955 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 02:35:52.496040  994955 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 02:35:52.496045  994955 command_runner.go:130] > storage_driver = "overlay"
	I0116 02:35:52.496053  994955 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 02:35:52.496061  994955 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 02:35:52.496066  994955 command_runner.go:130] > storage_option = [
	I0116 02:35:52.496076  994955 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0116 02:35:52.496082  994955 command_runner.go:130] > ]
	I0116 02:35:52.496094  994955 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 02:35:52.496107  994955 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 02:35:52.496118  994955 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 02:35:52.496128  994955 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 02:35:52.496138  994955 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 02:35:52.496145  994955 command_runner.go:130] > # always happen on a node reboot
	I0116 02:35:52.496150  994955 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 02:35:52.496168  994955 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 02:35:52.496182  994955 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 02:35:52.496198  994955 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 02:35:52.496210  994955 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 02:35:52.496225  994955 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 02:35:52.496237  994955 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 02:35:52.496246  994955 command_runner.go:130] > # internal_wipe = true
	I0116 02:35:52.496260  994955 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 02:35:52.496274  994955 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 02:35:52.496286  994955 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 02:35:52.496299  994955 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 02:35:52.496312  994955 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 02:35:52.496319  994955 command_runner.go:130] > [crio.api]
	I0116 02:35:52.496325  994955 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 02:35:52.496335  994955 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 02:35:52.496348  994955 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 02:35:52.496359  994955 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 02:35:52.496375  994955 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 02:35:52.496388  994955 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 02:35:52.496398  994955 command_runner.go:130] > # stream_port = "0"
	I0116 02:35:52.496406  994955 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 02:35:52.496412  994955 command_runner.go:130] > # stream_enable_tls = false
	I0116 02:35:52.496426  994955 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 02:35:52.496436  994955 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 02:35:52.496450  994955 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 02:35:52.496464  994955 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 02:35:52.496475  994955 command_runner.go:130] > # minutes.
	I0116 02:35:52.496485  994955 command_runner.go:130] > # stream_tls_cert = ""
	I0116 02:35:52.496494  994955 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 02:35:52.496507  994955 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 02:35:52.496519  994955 command_runner.go:130] > # stream_tls_key = ""
	I0116 02:35:52.496531  994955 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 02:35:52.496545  994955 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 02:35:52.496558  994955 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 02:35:52.496568  994955 command_runner.go:130] > # stream_tls_ca = ""
	I0116 02:35:52.496578  994955 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:35:52.496588  994955 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0116 02:35:52.496604  994955 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:35:52.496615  994955 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0116 02:35:52.496639  994955 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 02:35:52.496651  994955 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 02:35:52.496659  994955 command_runner.go:130] > [crio.runtime]
	I0116 02:35:52.496665  994955 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 02:35:52.496677  994955 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 02:35:52.496687  994955 command_runner.go:130] > # "nofile=1024:2048"
	I0116 02:35:52.496707  994955 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 02:35:52.496717  994955 command_runner.go:130] > # default_ulimits = [
	I0116 02:35:52.496726  994955 command_runner.go:130] > # ]
	I0116 02:35:52.496739  994955 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 02:35:52.496747  994955 command_runner.go:130] > # no_pivot = false
	I0116 02:35:52.496755  994955 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 02:35:52.496768  994955 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 02:35:52.496780  994955 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 02:35:52.496793  994955 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 02:35:52.496805  994955 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 02:35:52.496819  994955 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:35:52.496828  994955 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0116 02:35:52.496832  994955 command_runner.go:130] > # Cgroup setting for conmon
	I0116 02:35:52.496845  994955 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 02:35:52.496856  994955 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 02:35:52.496869  994955 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 02:35:52.496881  994955 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 02:35:52.496894  994955 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:35:52.496904  994955 command_runner.go:130] > conmon_env = [
	I0116 02:35:52.496914  994955 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0116 02:35:52.496918  994955 command_runner.go:130] > ]
	I0116 02:35:52.496930  994955 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 02:35:52.496943  994955 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 02:35:52.496956  994955 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 02:35:52.496966  994955 command_runner.go:130] > # default_env = [
	I0116 02:35:52.496975  994955 command_runner.go:130] > # ]
	I0116 02:35:52.496987  994955 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 02:35:52.496996  994955 command_runner.go:130] > # selinux = false
	I0116 02:35:52.497003  994955 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 02:35:52.497015  994955 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 02:35:52.497029  994955 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 02:35:52.497039  994955 command_runner.go:130] > # seccomp_profile = ""
	I0116 02:35:52.497051  994955 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 02:35:52.497064  994955 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 02:35:52.497077  994955 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 02:35:52.497085  994955 command_runner.go:130] > # which might increase security.
	I0116 02:35:52.497090  994955 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0116 02:35:52.497100  994955 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 02:35:52.497115  994955 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 02:35:52.497128  994955 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 02:35:52.497141  994955 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 02:35:52.497152  994955 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:35:52.497165  994955 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 02:35:52.497173  994955 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 02:35:52.497180  994955 command_runner.go:130] > # the cgroup blockio controller.
	I0116 02:35:52.497211  994955 command_runner.go:130] > # blockio_config_file = ""
	I0116 02:35:52.497225  994955 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 02:35:52.497235  994955 command_runner.go:130] > # irqbalance daemon.
	I0116 02:35:52.497247  994955 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 02:35:52.497258  994955 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 02:35:52.497266  994955 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:35:52.497277  994955 command_runner.go:130] > # rdt_config_file = ""
	I0116 02:35:52.497290  994955 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 02:35:52.497300  994955 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 02:35:52.497313  994955 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 02:35:52.497323  994955 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 02:35:52.497336  994955 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 02:35:52.497344  994955 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 02:35:52.497354  994955 command_runner.go:130] > # will be added.
	I0116 02:35:52.497365  994955 command_runner.go:130] > # default_capabilities = [
	I0116 02:35:52.497375  994955 command_runner.go:130] > # 	"CHOWN",
	I0116 02:35:52.497385  994955 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 02:35:52.497395  994955 command_runner.go:130] > # 	"FSETID",
	I0116 02:35:52.497407  994955 command_runner.go:130] > # 	"FOWNER",
	I0116 02:35:52.497416  994955 command_runner.go:130] > # 	"SETGID",
	I0116 02:35:52.497424  994955 command_runner.go:130] > # 	"SETUID",
	I0116 02:35:52.497428  994955 command_runner.go:130] > # 	"SETPCAP",
	I0116 02:35:52.497437  994955 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 02:35:52.497444  994955 command_runner.go:130] > # 	"KILL",
	I0116 02:35:52.497454  994955 command_runner.go:130] > # ]
	I0116 02:35:52.497467  994955 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 02:35:52.497480  994955 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:35:52.497490  994955 command_runner.go:130] > # default_sysctls = [
	I0116 02:35:52.497499  994955 command_runner.go:130] > # ]
	I0116 02:35:52.497508  994955 command_runner.go:130] > # List of devices on the host that a
	I0116 02:35:52.497517  994955 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 02:35:52.497527  994955 command_runner.go:130] > # allowed_devices = [
	I0116 02:35:52.497535  994955 command_runner.go:130] > # 	"/dev/fuse",
	I0116 02:35:52.497544  994955 command_runner.go:130] > # ]
	I0116 02:35:52.497555  994955 command_runner.go:130] > # List of additional devices, specified as
	I0116 02:35:52.497571  994955 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 02:35:52.497586  994955 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 02:35:52.497612  994955 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:35:52.497623  994955 command_runner.go:130] > # additional_devices = [
	I0116 02:35:52.497629  994955 command_runner.go:130] > # ]
	I0116 02:35:52.497641  994955 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 02:35:52.497650  994955 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 02:35:52.497660  994955 command_runner.go:130] > # 	"/etc/cdi",
	I0116 02:35:52.497670  994955 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 02:35:52.497678  994955 command_runner.go:130] > # ]
	I0116 02:35:52.497688  994955 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 02:35:52.497705  994955 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 02:35:52.497716  994955 command_runner.go:130] > # Defaults to false.
	I0116 02:35:52.497725  994955 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 02:35:52.497738  994955 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 02:35:52.497751  994955 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 02:35:52.497761  994955 command_runner.go:130] > # hooks_dir = [
	I0116 02:35:52.497770  994955 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 02:35:52.497778  994955 command_runner.go:130] > # ]
	I0116 02:35:52.497795  994955 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 02:35:52.497821  994955 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 02:35:52.497830  994955 command_runner.go:130] > # its default mounts from the following two files:
	I0116 02:35:52.497839  994955 command_runner.go:130] > #
	I0116 02:35:52.497850  994955 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 02:35:52.497863  994955 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 02:35:52.497877  994955 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 02:35:52.497886  994955 command_runner.go:130] > #
	I0116 02:35:52.497901  994955 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 02:35:52.497914  994955 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 02:35:52.497927  994955 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 02:35:52.497936  994955 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 02:35:52.497941  994955 command_runner.go:130] > #
	I0116 02:35:52.497950  994955 command_runner.go:130] > # default_mounts_file = ""
	I0116 02:35:52.497963  994955 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 02:35:52.497978  994955 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 02:35:52.497987  994955 command_runner.go:130] > pids_limit = 1024
	I0116 02:35:52.498003  994955 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0116 02:35:52.498015  994955 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 02:35:52.498026  994955 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 02:35:52.498042  994955 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 02:35:52.498053  994955 command_runner.go:130] > # log_size_max = -1
	I0116 02:35:52.498067  994955 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0116 02:35:52.498077  994955 command_runner.go:130] > # log_to_journald = false
	I0116 02:35:52.498090  994955 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 02:35:52.498100  994955 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 02:35:52.498108  994955 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 02:35:52.498119  994955 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 02:35:52.498133  994955 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 02:35:52.498143  994955 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 02:35:52.498155  994955 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 02:35:52.498165  994955 command_runner.go:130] > # read_only = false
	I0116 02:35:52.498179  994955 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 02:35:52.498189  994955 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 02:35:52.498198  994955 command_runner.go:130] > # live configuration reload.
	I0116 02:35:52.498208  994955 command_runner.go:130] > # log_level = "info"
	I0116 02:35:52.498222  994955 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 02:35:52.498233  994955 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:35:52.498243  994955 command_runner.go:130] > # log_filter = ""
	I0116 02:35:52.498255  994955 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 02:35:52.498268  994955 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 02:35:52.498275  994955 command_runner.go:130] > # separated by comma.
	I0116 02:35:52.498280  994955 command_runner.go:130] > # uid_mappings = ""
	I0116 02:35:52.498293  994955 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 02:35:52.498307  994955 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 02:35:52.498317  994955 command_runner.go:130] > # separated by comma.
	I0116 02:35:52.498327  994955 command_runner.go:130] > # gid_mappings = ""
	I0116 02:35:52.498340  994955 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 02:35:52.498354  994955 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:35:52.498363  994955 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:35:52.498373  994955 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 02:35:52.498387  994955 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 02:35:52.498400  994955 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:35:52.498413  994955 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:35:52.498425  994955 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 02:35:52.498437  994955 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 02:35:52.498446  994955 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 02:35:52.498453  994955 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 02:35:52.498464  994955 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 02:35:52.498477  994955 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 02:35:52.498490  994955 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 02:35:52.498502  994955 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 02:35:52.498513  994955 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 02:35:52.498524  994955 command_runner.go:130] > drop_infra_ctr = false
	I0116 02:35:52.498533  994955 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 02:35:52.498542  994955 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 02:35:52.498558  994955 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 02:35:52.498568  994955 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 02:35:52.498581  994955 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 02:35:52.498593  994955 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 02:35:52.498603  994955 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 02:35:52.498615  994955 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 02:35:52.498624  994955 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0116 02:35:52.498635  994955 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 02:35:52.498649  994955 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 02:35:52.498663  994955 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 02:35:52.498673  994955 command_runner.go:130] > # default_runtime = "runc"
	I0116 02:35:52.498685  994955 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 02:35:52.498701  994955 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0116 02:35:52.498717  994955 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 02:35:52.498729  994955 command_runner.go:130] > # creation as a file is not desired either.
	I0116 02:35:52.498745  994955 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 02:35:52.498757  994955 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 02:35:52.498768  994955 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 02:35:52.498776  994955 command_runner.go:130] > # ]
	I0116 02:35:52.498786  994955 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 02:35:52.498800  994955 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 02:35:52.498815  994955 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 02:35:52.498828  994955 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 02:35:52.498837  994955 command_runner.go:130] > #
	I0116 02:35:52.498849  994955 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 02:35:52.498872  994955 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 02:35:52.498883  994955 command_runner.go:130] > #  runtime_type = "oci"
	I0116 02:35:52.498892  994955 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 02:35:52.498904  994955 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 02:35:52.498914  994955 command_runner.go:130] > #  allowed_annotations = []
	I0116 02:35:52.498923  994955 command_runner.go:130] > # Where:
	I0116 02:35:52.498935  994955 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 02:35:52.498948  994955 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 02:35:52.498960  994955 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 02:35:52.498974  994955 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 02:35:52.498984  994955 command_runner.go:130] > #   in $PATH.
	I0116 02:35:52.498998  994955 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 02:35:52.499009  994955 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 02:35:52.499022  994955 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 02:35:52.499031  994955 command_runner.go:130] > #   state.
	I0116 02:35:52.499041  994955 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 02:35:52.499053  994955 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0116 02:35:52.499068  994955 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 02:35:52.499080  994955 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 02:35:52.499093  994955 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 02:35:52.499107  994955 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 02:35:52.499117  994955 command_runner.go:130] > #   The currently recognized values are:
	I0116 02:35:52.499126  994955 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 02:35:52.499141  994955 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 02:35:52.499156  994955 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 02:35:52.499169  994955 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 02:35:52.499184  994955 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 02:35:52.499197  994955 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 02:35:52.499206  994955 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 02:35:52.499218  994955 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 02:35:52.499231  994955 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 02:35:52.499242  994955 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 02:35:52.499249  994955 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0116 02:35:52.499259  994955 command_runner.go:130] > runtime_type = "oci"
	I0116 02:35:52.499269  994955 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 02:35:52.499280  994955 command_runner.go:130] > runtime_config_path = ""
	I0116 02:35:52.499289  994955 command_runner.go:130] > monitor_path = ""
	I0116 02:35:52.499295  994955 command_runner.go:130] > monitor_cgroup = ""
	I0116 02:35:52.499302  994955 command_runner.go:130] > monitor_exec_cgroup = ""
	I0116 02:35:52.499317  994955 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 02:35:52.499328  994955 command_runner.go:130] > # running containers
	I0116 02:35:52.499336  994955 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 02:35:52.499349  994955 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 02:35:52.499380  994955 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 02:35:52.499394  994955 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 02:35:52.499406  994955 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 02:35:52.499417  994955 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 02:35:52.499427  994955 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 02:35:52.499438  994955 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 02:35:52.499449  994955 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 02:35:52.499458  994955 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0116 02:35:52.499468  994955 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 02:35:52.499480  994955 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 02:35:52.499495  994955 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 02:35:52.499510  994955 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0116 02:35:52.499526  994955 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 02:35:52.499538  994955 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 02:35:52.499551  994955 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 02:35:52.499567  994955 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 02:35:52.499580  994955 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 02:35:52.499596  994955 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 02:35:52.499605  994955 command_runner.go:130] > # Example:
	I0116 02:35:52.499616  994955 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 02:35:52.499626  994955 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 02:35:52.499634  994955 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 02:35:52.499642  994955 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 02:35:52.499653  994955 command_runner.go:130] > # cpuset = 0
	I0116 02:35:52.499663  994955 command_runner.go:130] > # cpushares = "0-1"
	I0116 02:35:52.499672  994955 command_runner.go:130] > # Where:
	I0116 02:35:52.499680  994955 command_runner.go:130] > # The workload name is workload-type.
	I0116 02:35:52.499700  994955 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 02:35:52.499711  994955 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 02:35:52.499719  994955 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 02:35:52.499736  994955 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 02:35:52.499749  994955 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0116 02:35:52.499758  994955 command_runner.go:130] > # 
	I0116 02:35:52.499769  994955 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 02:35:52.499777  994955 command_runner.go:130] > #
	I0116 02:35:52.499791  994955 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 02:35:52.499800  994955 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 02:35:52.499813  994955 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 02:35:52.499828  994955 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 02:35:52.499841  994955 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 02:35:52.499850  994955 command_runner.go:130] > [crio.image]
	I0116 02:35:52.499860  994955 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 02:35:52.499870  994955 command_runner.go:130] > # default_transport = "docker://"
	I0116 02:35:52.499881  994955 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 02:35:52.499892  994955 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:35:52.499902  994955 command_runner.go:130] > # global_auth_file = ""
	I0116 02:35:52.499915  994955 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 02:35:52.499926  994955 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:35:52.499938  994955 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 02:35:52.499952  994955 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 02:35:52.499964  994955 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:35:52.499972  994955 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:35:52.499983  994955 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 02:35:52.499996  994955 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 02:35:52.500010  994955 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0116 02:35:52.500023  994955 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0116 02:35:52.500036  994955 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 02:35:52.500045  994955 command_runner.go:130] > # pause_command = "/pause"
	I0116 02:35:52.500055  994955 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 02:35:52.500068  994955 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 02:35:52.500083  994955 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 02:35:52.500096  994955 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 02:35:52.500108  994955 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 02:35:52.500118  994955 command_runner.go:130] > # signature_policy = ""
	I0116 02:35:52.500131  994955 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 02:35:52.500140  994955 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 02:35:52.500147  994955 command_runner.go:130] > # changing them here.
	I0116 02:35:52.500151  994955 command_runner.go:130] > # insecure_registries = [
	I0116 02:35:52.500157  994955 command_runner.go:130] > # ]
	I0116 02:35:52.500169  994955 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 02:35:52.500183  994955 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0116 02:35:52.500193  994955 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 02:35:52.500203  994955 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 02:35:52.500213  994955 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 02:35:52.500226  994955 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 02:35:52.500235  994955 command_runner.go:130] > # CNI plugins.
	I0116 02:35:52.500242  994955 command_runner.go:130] > [crio.network]
	I0116 02:35:52.500248  994955 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 02:35:52.500256  994955 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0116 02:35:52.500262  994955 command_runner.go:130] > # cni_default_network = ""
	I0116 02:35:52.500268  994955 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 02:35:52.500275  994955 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 02:35:52.500281  994955 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 02:35:52.500287  994955 command_runner.go:130] > # plugin_dirs = [
	I0116 02:35:52.500291  994955 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 02:35:52.500297  994955 command_runner.go:130] > # ]
	I0116 02:35:52.500303  994955 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 02:35:52.500312  994955 command_runner.go:130] > [crio.metrics]
	I0116 02:35:52.500324  994955 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 02:35:52.500335  994955 command_runner.go:130] > enable_metrics = true
	I0116 02:35:52.500346  994955 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 02:35:52.500357  994955 command_runner.go:130] > # Per default all metrics are enabled.
	I0116 02:35:52.500371  994955 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0116 02:35:52.500383  994955 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 02:35:52.500391  994955 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 02:35:52.500398  994955 command_runner.go:130] > # metrics_collectors = [
	I0116 02:35:52.500402  994955 command_runner.go:130] > # 	"operations",
	I0116 02:35:52.500408  994955 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 02:35:52.500413  994955 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 02:35:52.500419  994955 command_runner.go:130] > # 	"operations_errors",
	I0116 02:35:52.500425  994955 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 02:35:52.500431  994955 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 02:35:52.500436  994955 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 02:35:52.500442  994955 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 02:35:52.500447  994955 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 02:35:52.500453  994955 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 02:35:52.500457  994955 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 02:35:52.500461  994955 command_runner.go:130] > # 	"containers_oom_total",
	I0116 02:35:52.500468  994955 command_runner.go:130] > # 	"containers_oom",
	I0116 02:35:52.500472  994955 command_runner.go:130] > # 	"processes_defunct",
	I0116 02:35:52.500478  994955 command_runner.go:130] > # 	"operations_total",
	I0116 02:35:52.500482  994955 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 02:35:52.500489  994955 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 02:35:52.500493  994955 command_runner.go:130] > # 	"operations_errors_total",
	I0116 02:35:52.500499  994955 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 02:35:52.500504  994955 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 02:35:52.500511  994955 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 02:35:52.500515  994955 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 02:35:52.500522  994955 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 02:35:52.500527  994955 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 02:35:52.500533  994955 command_runner.go:130] > # ]
	I0116 02:35:52.500542  994955 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 02:35:52.500553  994955 command_runner.go:130] > # metrics_port = 9090
	I0116 02:35:52.500562  994955 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 02:35:52.500568  994955 command_runner.go:130] > # metrics_socket = ""
	I0116 02:35:52.500574  994955 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 02:35:52.500580  994955 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 02:35:52.500586  994955 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 02:35:52.500590  994955 command_runner.go:130] > # certificate on any modification event.
	I0116 02:35:52.500594  994955 command_runner.go:130] > # metrics_cert = ""
	I0116 02:35:52.500599  994955 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 02:35:52.500604  994955 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 02:35:52.500607  994955 command_runner.go:130] > # metrics_key = ""
	I0116 02:35:52.500613  994955 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 02:35:52.500616  994955 command_runner.go:130] > [crio.tracing]
	I0116 02:35:52.500621  994955 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 02:35:52.500627  994955 command_runner.go:130] > # enable_tracing = false
	I0116 02:35:52.500632  994955 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0116 02:35:52.500636  994955 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 02:35:52.500641  994955 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 02:35:52.500647  994955 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 02:35:52.500652  994955 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 02:35:52.500656  994955 command_runner.go:130] > [crio.stats]
	I0116 02:35:52.500661  994955 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 02:35:52.500666  994955 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 02:35:52.500670  994955 command_runner.go:130] > # stats_collection_period = 0
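The dump above is the complete effective CRI-O configuration on the node; the values that matter downstream in this log include cgroup_manager (which becomes the CgroupDriver/cgroupDriver in the kubeadm config below), pause_image and pids_limit. As a rough sketch of how such settings are usually inspected or overridden on a CRI-O host (the drop-in directory shown is CRI-O's default config dir and may differ per distribution):

    # Show the handful of effective settings referenced later in this log
    sudo crio config 2>/dev/null | grep -E '^(cgroup_manager|pause_image|pids_limit) ='

    # Overrides normally go into a TOML drop-in instead of editing crio.conf directly
    printf '[crio.runtime]\npids_limit = 2048\n' | sudo tee /etc/crio/crio.conf.d/99-pids.conf
    sudo systemctl restart crio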
	I0116 02:35:52.500765  994955 cni.go:84] Creating CNI manager for ""
	I0116 02:35:52.500772  994955 cni.go:136] 3 nodes found, recommending kindnet
	I0116 02:35:52.500798  994955 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:35:52.500822  994955 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.15 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-835787 NodeName:multinode-835787-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 02:35:52.500959  994955 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-835787-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
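The generated manifest above stitches InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into one multi-document YAML, which is the form kubeadm consumes. If a file like this needs to be checked by hand, kubeadm can print its own defaults for the same config kinds for comparison, and the preflight phase can be run against the file without bootstrapping anything; the file path below is illustrative, not taken from this run:

    # Print kubeadm's defaults for the same config kinds, for a side-by-side diff
    kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration

    # Run only the preflight checks against a candidate config
    sudo kubeadm init phase preflight --config /tmp/kubeadm.yaml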
	
	I0116 02:35:52.501021  994955 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-835787-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-835787 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
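The drop-in above replaces the kubelet's ExecStart: the empty ExecStart= line clears whatever the packaged unit defined, and the second line points the kubelet at the minikube-managed binary, the CRI-O socket and the node IP. After minikube copies a drop-in like this onto the node (a few lines below), it only takes effect once systemd re-reads its unit files; roughly:

    # Show the unit as systemd merges it, drop-ins included
    systemctl cat kubelet

    # Pick up a newly written drop-in and restart the service
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet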
	I0116 02:35:52.501071  994955 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 02:35:52.510786  994955 command_runner.go:130] > kubeadm
	I0116 02:35:52.510809  994955 command_runner.go:130] > kubectl
	I0116 02:35:52.510815  994955 command_runner.go:130] > kubelet
	I0116 02:35:52.510838  994955 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 02:35:52.510893  994955 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0116 02:35:52.519400  994955 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0116 02:35:52.535195  994955 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 02:35:52.551017  994955 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I0116 02:35:52.554801  994955 command_runner.go:130] > 192.168.39.50	control-plane.minikube.internal
	I0116 02:35:52.555059  994955 host.go:66] Checking if "multinode-835787" exists ...
	I0116 02:35:52.555356  994955 config.go:182] Loaded profile config "multinode-835787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:35:52.555504  994955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:52.555552  994955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:52.570862  994955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0116 02:35:52.571352  994955 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:52.571880  994955 main.go:141] libmachine: Using API Version  1
	I0116 02:35:52.571906  994955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:52.572264  994955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:52.572489  994955 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:35:52.572676  994955 start.go:304] JoinCluster: &{Name:multinode-835787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-835787 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.123 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:35:52.572792  994955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0116 02:35:52.572817  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:35:52.575398  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:35:52.575884  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:35:52.575916  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:35:52.576081  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:35:52.576279  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:35:52.576430  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:35:52.576548  994955 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa Username:docker}
	I0116 02:35:52.756084  994955 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token d94jtd.ldg9g4bj1oh5a18y --discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b 
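The join command printed above comes from kubeadm's token helper on the control-plane VM. A manual equivalent, using the same binary path minikube invokes (illustrative only; the token and hash are the values shown in the log line above):

  sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
    kubeadm token create --print-join-command --ttl=0   # --ttl=0 creates a non-expiring bootstrap token
  sudo kubeadm token list                               # optional: confirm the token was created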
	I0116 02:35:52.760623  994955 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 02:35:52.760685  994955 host.go:66] Checking if "multinode-835787" exists ...
	I0116 02:35:52.761019  994955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:52.761069  994955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:52.776507  994955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I0116 02:35:52.777070  994955 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:52.777632  994955 main.go:141] libmachine: Using API Version  1
	I0116 02:35:52.777658  994955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:52.778048  994955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:52.778295  994955 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:35:52.778607  994955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-835787-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0116 02:35:52.778632  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:35:52.781310  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:35:52.781839  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:35:52.781873  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:35:52.782005  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:35:52.782215  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:35:52.782401  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:35:52.782550  994955 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa Username:docker}
	I0116 02:35:52.957265  994955 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0116 02:35:53.025109  994955 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-nllfm, kube-system/kube-proxy-hxx8p
	I0116 02:35:56.045735  994955 command_runner.go:130] > node/multinode-835787-m02 cordoned
	I0116 02:35:56.045770  994955 command_runner.go:130] > pod "busybox-5bc68d56bd-hzzdv" has DeletionTimestamp older than 1 seconds, skipping
	I0116 02:35:56.045788  994955 command_runner.go:130] > node/multinode-835787-m02 drained
	I0116 02:35:56.045831  994955 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-835787-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.267195201s)
	I0116 02:35:56.045859  994955 node.go:108] successfully drained node "m02"
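Before the worker can rejoin, the stale "m02" node is drained. A standalone kubectl equivalent of the drain invoked above (same flags; --delete-local-data is just the deprecated spelling of --delete-emptydir-data, which explains the warning):

  kubectl drain multinode-835787-m02 \
    --force --grace-period=1 --skip-wait-for-delete-timeout=1 \
    --disable-eviction --ignore-daemonsets --delete-emptydir-data
  # DaemonSet-managed pods (kindnet, kube-proxy) are ignored; everything else is
  # deleted directly, since --disable-eviction bypasses the eviction API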
	I0116 02:35:56.046239  994955 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:35:56.046455  994955 kapi.go:59] client config for multinode-835787: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key", CAFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:35:56.046962  994955 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0116 02:35:56.047025  994955 round_trippers.go:463] DELETE https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:35:56.047033  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:56.047041  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:56.047047  994955 round_trippers.go:473]     Content-Type: application/json
	I0116 02:35:56.047052  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:56.058988  994955 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0116 02:35:56.059018  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:56.059030  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:56.059039  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:56.059047  994955 round_trippers.go:580]     Content-Length: 171
	I0116 02:35:56.059055  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:56 GMT
	I0116 02:35:56.059063  994955 round_trippers.go:580]     Audit-Id: aebe37b0-89a1-4fc2-aef9-b06ab981d65b
	I0116 02:35:56.059069  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:56.059079  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:56.059118  994955 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-835787-m02","kind":"nodes","uid":"c989eb4c-b30d-4969-ac6c-b1c11d2d8a5f"}}
	I0116 02:35:56.059159  994955 node.go:124] successfully deleted node "m02"
	I0116 02:35:56.059173  994955 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
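After the drain, the Node object itself is removed with the DELETE call shown above. Outside minikube's code path the same cleanup is simply (a sketch, not what minikube executes):

  kubectl delete node multinode-835787-m02
  # equivalent to the raw call in the log:
  # DELETE https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02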
	I0116 02:35:56.059202  994955 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 02:35:56.059228  994955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d94jtd.ldg9g4bj1oh5a18y --discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-835787-m02"
	I0116 02:35:56.110902  994955 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 02:35:56.268055  994955 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0116 02:35:56.268096  994955 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0116 02:35:56.335180  994955 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:35:56.335211  994955 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:35:56.335219  994955 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 02:35:56.490661  994955 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0116 02:35:57.020226  994955 command_runner.go:130] > This node has joined the cluster:
	I0116 02:35:57.020262  994955 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0116 02:35:57.020273  994955 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0116 02:35:57.020284  994955 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0116 02:35:57.023007  994955 command_runner.go:130] ! W0116 02:35:56.104314    2648 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0116 02:35:57.023038  994955 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0116 02:35:57.023052  994955 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0116 02:35:57.023065  994955 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0116 02:35:57.023093  994955 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
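The worker then rejoins with the token generated earlier, and the kubelet is re-enabled. Run by hand on the worker VM, the sequence amounts roughly to the following (the token and hash placeholders stand for the values in the printed join command; adding the unix:// scheme avoids the CRI-socket deprecation warning logged above):

  sudo kubeadm join control-plane.minikube.internal:8443 \
    --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
    --ignore-preflight-errors=all \
    --cri-socket unix:///var/run/crio/crio.sock \
    --node-name=multinode-835787-m02
  sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet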
	I0116 02:35:57.316876  994955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=multinode-835787 minikube.k8s.io/updated_at=2024_01_16T02_35_57_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:57.429394  994955 command_runner.go:130] > node/multinode-835787-m02 labeled
	I0116 02:35:57.441495  994955 command_runner.go:130] > node/multinode-835787-m03 labeled
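Both worker nodes pick up the minikube metadata in one command because the label call targets every node without the primary label. A trimmed-down equivalent (the real invocation also stamps the commit and updated_at keys):

  kubectl label nodes -l 'minikube.k8s.io/primary!=true' --overwrite \
    minikube.k8s.io/name=multinode-835787 \
    minikube.k8s.io/primary=false \
    minikube.k8s.io/version=v1.32.0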
	I0116 02:35:57.444469  994955 start.go:306] JoinCluster complete in 4.871784469s
	I0116 02:35:57.444502  994955 cni.go:84] Creating CNI manager for ""
	I0116 02:35:57.444510  994955 cni.go:136] 3 nodes found, recommending kindnet
	I0116 02:35:57.444575  994955 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 02:35:57.454645  994955 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 02:35:57.454682  994955 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 02:35:57.454692  994955 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 02:35:57.454703  994955 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:35:57.454719  994955 command_runner.go:130] > Access: 2024-01-16 02:33:25.428611593 +0000
	I0116 02:35:57.454727  994955 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 02:35:57.454735  994955 command_runner.go:130] > Change: 2024-01-16 02:33:23.419611593 +0000
	I0116 02:35:57.454745  994955 command_runner.go:130] >  Birth: -
	I0116 02:35:57.454846  994955 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 02:35:57.454867  994955 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 02:35:57.479016  994955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 02:35:57.858334  994955 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 02:35:57.862474  994955 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 02:35:57.867764  994955 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 02:35:57.880879  994955 command_runner.go:130] > daemonset.apps/kindnet configured
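With three nodes detected, kindnet is chosen as the CNI and its manifest is re-applied; the "unchanged"/"configured" results show the resources already existed. Applied manually it would look like this (paths as logged; the manifest is first copied to /var/tmp/minikube/cni.yaml):

  sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
    --kubeconfig=/var/lib/minikube/kubeconfig \
    apply -f /var/tmp/minikube/cni.yaml
  kubectl -n kube-system get daemonset kindnet   # confirm the DaemonSet covers all three nodes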
	I0116 02:35:57.883983  994955 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:35:57.884336  994955 kapi.go:59] client config for multinode-835787: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key", CAFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:35:57.884841  994955 round_trippers.go:463] GET https://192.168.39.50:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:35:57.884858  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:57.884867  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:57.884876  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:57.887649  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:35:57.887672  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:57.887683  994955 round_trippers.go:580]     Content-Length: 291
	I0116 02:35:57.887691  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:57 GMT
	I0116 02:35:57.887698  994955 round_trippers.go:580]     Audit-Id: ca06cb56-82a3-4628-ac3a-dcef836bc91b
	I0116 02:35:57.887707  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:57.887716  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:57.887726  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:57.887736  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:57.887770  994955 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c3d1d02d-1d3d-4837-b3ba-04423f0d8104","resourceVersion":"927","creationTimestamp":"2024-01-16T02:23:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 02:35:57.887898  994955 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-835787" context rescaled to 1 replicas
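The GET on the coredns scale subresource shows the deployment is already at one replica, so nothing needs to change. Forcing the same state by hand would be:

  kubectl -n kube-system scale deployment coredns --replicas=1
  kubectl -n kube-system get deployment coredns   # READY should report 1/1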
	I0116 02:35:57.887967  994955 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 02:35:57.890022  994955 out.go:177] * Verifying Kubernetes components...
	I0116 02:35:57.891450  994955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:35:57.930681  994955 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:35:57.930994  994955 kapi.go:59] client config for multinode-835787: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key", CAFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:35:57.931292  994955 node_ready.go:35] waiting up to 6m0s for node "multinode-835787-m02" to be "Ready" ...
	I0116 02:35:57.931398  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:35:57.931408  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:57.931418  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:57.931427  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:57.934412  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:35:57.934447  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:57.934460  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:57.934469  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:57 GMT
	I0116 02:35:57.934477  994955 round_trippers.go:580]     Audit-Id: 90ec9dde-a3ad-475f-859f-fa631f3119db
	I0116 02:35:57.934486  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:57.934494  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:57.934503  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:57.934628  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"7ba249a5-ba94-4ff4-a7a8-df4d380c08dc","resourceVersion":"1073","creationTimestamp":"2024-01-16T02:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_35_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:35:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0116 02:35:57.935011  994955 node_ready.go:49] node "multinode-835787-m02" has status "Ready":"True"
	I0116 02:35:57.935039  994955 node_ready.go:38] duration metric: took 3.72772ms waiting for node "multinode-835787-m02" to be "Ready" ...
	I0116 02:35:57.935053  994955 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
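Everything from here on is readiness polling: the Node object for m02 first, then each system-critical pod paired with a GET of the node it is scheduled on. A rough command-line version of those checks:

  kubectl get node multinode-835787-m02 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # expect True
  kubectl -n kube-system wait pod -l k8s-app=kube-dns \
    --for=condition=Ready --timeout=6m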
	I0116 02:35:57.935150  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 02:35:57.935164  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:57.935178  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:57.935190  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:57.939556  994955 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:35:57.939580  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:57.939591  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:57.939599  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:57.939607  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:57 GMT
	I0116 02:35:57.939617  994955 round_trippers.go:580]     Audit-Id: 23fcafa8-90aa-4e27-9586-59fe8235c495
	I0116 02:35:57.939631  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:57.939637  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:57.940716  994955 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1083"},"items":[{"metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"922","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82198 chars]
	I0116 02:35:57.943794  994955 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-965sn" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:57.943912  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:35:57.943922  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:57.943930  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:57.943935  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:57.947260  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:35:57.947284  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:57.947294  994955 round_trippers.go:580]     Audit-Id: e63782e7-7e1c-453a-a57e-374e3ba391d8
	I0116 02:35:57.947304  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:57.947313  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:57.947334  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:57.947343  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:57.947354  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:57 GMT
	I0116 02:35:57.947789  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"922","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0116 02:35:57.948229  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:35:57.948249  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:57.948259  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:57.948268  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:57.951988  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:35:57.952008  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:57.952016  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:57.952025  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:57 GMT
	I0116 02:35:57.952034  994955 round_trippers.go:580]     Audit-Id: e8db3fc7-4590-4912-bd6c-5f144abdd022
	I0116 02:35:57.952043  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:57.952051  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:57.952058  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:57.952250  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"934","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 02:35:57.952699  994955 pod_ready.go:92] pod "coredns-5dd5756b68-965sn" in "kube-system" namespace has status "Ready":"True"
	I0116 02:35:57.952720  994955 pod_ready.go:81] duration metric: took 8.897705ms waiting for pod "coredns-5dd5756b68-965sn" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:57.952730  994955 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:57.952804  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-835787
	I0116 02:35:57.952813  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:57.952821  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:57.952827  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:57.955836  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:35:57.955863  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:57.955873  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:57.955881  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:57.955889  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:57.955897  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:57.955905  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:57 GMT
	I0116 02:35:57.955912  994955 round_trippers.go:580]     Audit-Id: e9061453-9e10-4ca2-849e-0160dde71d96
	I0116 02:35:57.956191  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-835787","namespace":"kube-system","uid":"ccb51de1-d565-42b0-bd30-9b1acb1c725d","resourceVersion":"879","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.50:2379","kubernetes.io/config.hash":"108085f55363e386b9f9c083ac579444","kubernetes.io/config.mirror":"108085f55363e386b9f9c083ac579444","kubernetes.io/config.seen":"2024-01-16T02:23:33.032941198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0116 02:35:57.956704  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:35:57.956720  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:57.956728  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:57.956734  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:57.959125  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:35:57.959145  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:57.959152  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:57 GMT
	I0116 02:35:57.959158  994955 round_trippers.go:580]     Audit-Id: 2c276b4a-fb43-45cd-9182-84755614d1cb
	I0116 02:35:57.959163  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:57.959168  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:57.959173  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:57.959179  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:57.959367  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"934","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 02:35:57.959819  994955 pod_ready.go:92] pod "etcd-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:35:57.959840  994955 pod_ready.go:81] duration metric: took 7.103575ms waiting for pod "etcd-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:57.959858  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:57.959917  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-835787
	I0116 02:35:57.959925  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:57.959933  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:57.959939  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:57.962544  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:35:57.962566  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:57.962577  994955 round_trippers.go:580]     Audit-Id: 9730b5d8-1db2-4ff1-840c-574c654203f6
	I0116 02:35:57.962585  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:57.962593  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:57.962601  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:57.962610  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:57.962622  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:57 GMT
	I0116 02:35:57.962784  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-835787","namespace":"kube-system","uid":"9c26db11-7208-4540-8a73-407a6edd3a0b","resourceVersion":"893","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.50:8443","kubernetes.io/config.hash":"b27880b6b81ca11dc023b4901941ff6f","kubernetes.io/config.mirror":"b27880b6b81ca11dc023b4901941ff6f","kubernetes.io/config.seen":"2024-01-16T02:23:33.032945135Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0116 02:35:57.963341  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:35:57.963361  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:57.963372  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:57.963382  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:57.965639  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:35:57.965657  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:57.965666  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:57.965674  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:57.965685  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:57.965693  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:57 GMT
	I0116 02:35:57.965707  994955 round_trippers.go:580]     Audit-Id: b6a63bb7-7823-4247-a163-d3785b6a78dc
	I0116 02:35:57.965720  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:57.965997  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"934","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 02:35:57.966357  994955 pod_ready.go:92] pod "kube-apiserver-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:35:57.966378  994955 pod_ready.go:81] duration metric: took 6.510617ms waiting for pod "kube-apiserver-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:57.966389  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:57.966463  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-835787
	I0116 02:35:57.966472  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:57.966482  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:57.966494  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:57.968948  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:35:57.968975  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:57.968985  994955 round_trippers.go:580]     Audit-Id: 51211b3c-2731-422c-bd9b-166ebb8bcae7
	I0116 02:35:57.968992  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:57.968999  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:57.969005  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:57.969011  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:57.969016  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:57 GMT
	I0116 02:35:57.969175  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-835787","namespace":"kube-system","uid":"daf9e312-54ad-4a4e-b334-9b84e55f8fef","resourceVersion":"885","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6adb137abb6e7ac4dcf8e50e41a3773b","kubernetes.io/config.mirror":"6adb137abb6e7ac4dcf8e50e41a3773b","kubernetes.io/config.seen":"2024-01-16T02:23:33.032946146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0116 02:35:57.969672  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:35:57.969744  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:57.969764  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:57.969774  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:57.971964  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:35:57.971986  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:57.971995  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:57 GMT
	I0116 02:35:57.972005  994955 round_trippers.go:580]     Audit-Id: 0472e042-b706-4351-9da7-34016e33f3f5
	I0116 02:35:57.972016  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:57.972038  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:57.972046  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:57.972054  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:57.972186  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"934","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 02:35:57.972648  994955 pod_ready.go:92] pod "kube-controller-manager-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:35:57.972670  994955 pod_ready.go:81] duration metric: took 6.273424ms waiting for pod "kube-controller-manager-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:57.972679  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpdqr" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:58.132138  994955 request.go:629] Waited for 159.357163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpdqr
	I0116 02:35:58.132213  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpdqr
	I0116 02:35:58.132218  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:58.132226  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:58.132233  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:58.136554  994955 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:35:58.136603  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:58.136618  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:58.136628  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:58.136640  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:58.136650  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:58.136658  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:58 GMT
	I0116 02:35:58.136666  994955 round_trippers.go:580]     Audit-Id: fb9154d5-fc04-4404-b18f-1b67d523721d
	I0116 02:35:58.136807  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fpdqr","generateName":"kube-proxy-","namespace":"kube-system","uid":"42b74cbd-93d8-4ac7-9071-112d5e7c572b","resourceVersion":"733","creationTimestamp":"2024-01-16T02:25:22Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0116 02:35:58.331712  994955 request.go:629] Waited for 194.329331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m03
	I0116 02:35:58.331806  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m03
	I0116 02:35:58.331815  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:58.331826  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:58.331837  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:58.334592  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:35:58.334623  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:58.334632  994955 round_trippers.go:580]     Audit-Id: bf2d5a1a-38ec-4fbc-9f94-649a71b9322c
	I0116 02:35:58.334641  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:58.334650  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:58.334658  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:58.334664  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:58.334684  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:58 GMT
	I0116 02:35:58.334815  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m03","uid":"67df5a31-bd76-4643-b628-d7570878cf19","resourceVersion":"1074","creationTimestamp":"2024-01-16T02:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_35_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 3966 chars]
	I0116 02:35:58.335142  994955 pod_ready.go:92] pod "kube-proxy-fpdqr" in "kube-system" namespace has status "Ready":"True"
	I0116 02:35:58.335163  994955 pod_ready.go:81] duration metric: took 362.477958ms waiting for pod "kube-proxy-fpdqr" in "kube-system" namespace to be "Ready" ...
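The "Waited ... due to client-side throttling" lines are the client-go rate limiter spacing out this burst of GETs, as the message itself notes, not API-server priority and fairness. Any of these requests can also be reproduced directly with the client certificates from the kapi.go config above (paths exactly as logged; purely illustrative):

  curl -s \
    --cacert /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt \
    --cert /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt \
    --key /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key \
    https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpdqr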
	I0116 02:35:58.335174  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gbvc2" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:58.532234  994955 request.go:629] Waited for 196.920223ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbvc2
	I0116 02:35:58.532317  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbvc2
	I0116 02:35:58.532360  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:58.532376  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:58.532387  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:58.535233  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:35:58.535263  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:58.535273  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:58 GMT
	I0116 02:35:58.535283  994955 round_trippers.go:580]     Audit-Id: d9d525f0-946e-461c-9fff-420daec27b41
	I0116 02:35:58.535291  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:58.535299  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:58.535307  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:58.535317  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:58.535471  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gbvc2","generateName":"kube-proxy-","namespace":"kube-system","uid":"74d63696-cb46-484d-937b-8883e6f1df06","resourceVersion":"824","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0116 02:35:58.731957  994955 request.go:629] Waited for 196.027381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:35:58.732047  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:35:58.732059  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:58.732070  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:58.732085  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:58.740349  994955 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0116 02:35:58.740387  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:58.740398  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:58.740407  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:58.740416  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:58 GMT
	I0116 02:35:58.740424  994955 round_trippers.go:580]     Audit-Id: f7918a2d-a430-4cd1-90e1-6a286997263f
	I0116 02:35:58.740432  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:58.740440  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:58.740659  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"934","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 02:35:58.741018  994955 pod_ready.go:92] pod "kube-proxy-gbvc2" in "kube-system" namespace has status "Ready":"True"
	I0116 02:35:58.741040  994955 pod_ready.go:81] duration metric: took 405.858845ms waiting for pod "kube-proxy-gbvc2" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:58.741055  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hxx8p" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:58.931961  994955 request.go:629] Waited for 190.803319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxx8p
	I0116 02:35:58.932037  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxx8p
	I0116 02:35:58.932052  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:58.932068  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:58.932082  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:58.935144  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:35:58.935167  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:58.935178  994955 round_trippers.go:580]     Audit-Id: 478fb9c7-a53d-4301-abc7-d9639da54c69
	I0116 02:35:58.935185  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:58.935191  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:58.935199  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:58.935207  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:58.935215  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:58 GMT
	I0116 02:35:58.935418  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hxx8p","generateName":"kube-proxy-","namespace":"kube-system","uid":"9c35aa68-14ac-41e1-81f8-8fdb0c48d9f1","resourceVersion":"1091","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0116 02:35:59.132401  994955 request.go:629] Waited for 196.380782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:35:59.132490  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:35:59.132498  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:59.132514  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:59.132527  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:59.135448  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:35:59.135473  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:59.135480  994955 round_trippers.go:580]     Audit-Id: dc4a3ef1-ed51-4a50-89b8-8e2b78fb1daf
	I0116 02:35:59.135487  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:59.135495  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:59.135504  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:59.135514  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:59.135523  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:59 GMT
	I0116 02:35:59.135669  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"7ba249a5-ba94-4ff4-a7a8-df4d380c08dc","resourceVersion":"1073","creationTimestamp":"2024-01-16T02:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_35_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:35:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0116 02:35:59.136036  994955 pod_ready.go:92] pod "kube-proxy-hxx8p" in "kube-system" namespace has status "Ready":"True"
	I0116 02:35:59.136061  994955 pod_ready.go:81] duration metric: took 394.996222ms waiting for pod "kube-proxy-hxx8p" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:59.136074  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:59.331616  994955 request.go:629] Waited for 195.447708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-835787
	I0116 02:35:59.331711  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-835787
	I0116 02:35:59.331722  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:59.331737  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:59.331751  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:59.335489  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:35:59.335520  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:59.335531  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:59.335540  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:59.335548  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:59 GMT
	I0116 02:35:59.335556  994955 round_trippers.go:580]     Audit-Id: c76846dd-5a83-47c4-a186-b32c8677f2d4
	I0116 02:35:59.335563  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:59.335571  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:59.335785  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-835787","namespace":"kube-system","uid":"7b9c28cc-6e78-413a-af72-511714d2462e","resourceVersion":"908","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"230f2dad53142209ac2ae48ed27aa7b4","kubernetes.io/config.mirror":"230f2dad53142209ac2ae48ed27aa7b4","kubernetes.io/config.seen":"2024-01-16T02:23:33.032947019Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0116 02:35:59.531436  994955 request.go:629] Waited for 195.167245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:35:59.531521  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:35:59.531584  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:59.531601  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:59.531608  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:59.534448  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:35:59.534474  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:59.534485  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:59 GMT
	I0116 02:35:59.534494  994955 round_trippers.go:580]     Audit-Id: 76be1682-12e1-4e1f-bdad-f71771144218
	I0116 02:35:59.534502  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:59.534510  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:59.534519  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:59.534527  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:59.534728  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"934","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 02:35:59.535086  994955 pod_ready.go:92] pod "kube-scheduler-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:35:59.535110  994955 pod_ready.go:81] duration metric: took 399.027385ms waiting for pod "kube-scheduler-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:59.535125  994955 pod_ready.go:38] duration metric: took 1.600056573s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:35:59.535151  994955 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:35:59.535208  994955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:35:59.549653  994955 system_svc.go:56] duration metric: took 14.493081ms WaitForService to wait for kubelet.
	I0116 02:35:59.549685  994955 kubeadm.go:581] duration metric: took 1.66168329s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:35:59.549707  994955 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:35:59.732140  994955 request.go:629] Waited for 182.349325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes
	I0116 02:35:59.732217  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes
	I0116 02:35:59.732224  994955 round_trippers.go:469] Request Headers:
	I0116 02:35:59.732237  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:35:59.732249  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:35:59.735525  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:35:59.735560  994955 round_trippers.go:577] Response Headers:
	I0116 02:35:59.735570  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:35:59 GMT
	I0116 02:35:59.735577  994955 round_trippers.go:580]     Audit-Id: 9657cb37-35d2-4c7b-8d83-0bcbe31ae6a0
	I0116 02:35:59.735585  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:35:59.735597  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:35:59.735619  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:35:59.735629  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:35:59.736393  994955 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1096"},"items":[{"metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"934","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16209 chars]
	I0116 02:35:59.737038  994955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:35:59.737065  994955 node_conditions.go:123] node cpu capacity is 2
	I0116 02:35:59.737077  994955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:35:59.737084  994955 node_conditions.go:123] node cpu capacity is 2
	I0116 02:35:59.737089  994955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:35:59.737096  994955 node_conditions.go:123] node cpu capacity is 2
	I0116 02:35:59.737104  994955 node_conditions.go:105] duration metric: took 187.391057ms to run NodePressure ...
	I0116 02:35:59.737122  994955 start.go:228] waiting for startup goroutines ...
	I0116 02:35:59.737156  994955 start.go:242] writing updated cluster config ...
	I0116 02:35:59.737669  994955 config.go:182] Loaded profile config "multinode-835787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:35:59.737781  994955 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/config.json ...
	I0116 02:35:59.741545  994955 out.go:177] * Starting worker node multinode-835787-m03 in cluster multinode-835787
	I0116 02:35:59.742881  994955 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:35:59.742909  994955 cache.go:56] Caching tarball of preloaded images
	I0116 02:35:59.743016  994955 preload.go:174] Found /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 02:35:59.743038  994955 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 02:35:59.743144  994955 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/config.json ...
	I0116 02:35:59.743343  994955 start.go:365] acquiring machines lock for multinode-835787-m03: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 02:35:59.743405  994955 start.go:369] acquired machines lock for "multinode-835787-m03" in 38.446µs
	I0116 02:35:59.743427  994955 start.go:96] Skipping create...Using existing machine configuration
	I0116 02:35:59.743438  994955 fix.go:54] fixHost starting: m03
	I0116 02:35:59.743708  994955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:59.743753  994955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:59.758884  994955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I0116 02:35:59.759310  994955 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:59.759835  994955 main.go:141] libmachine: Using API Version  1
	I0116 02:35:59.759862  994955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:59.760197  994955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:59.760421  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .DriverName
	I0116 02:35:59.760598  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetState
	I0116 02:35:59.762321  994955 fix.go:102] recreateIfNeeded on multinode-835787-m03: state=Running err=<nil>
	W0116 02:35:59.762340  994955 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 02:35:59.764258  994955 out.go:177] * Updating the running kvm2 "multinode-835787-m03" VM ...
	I0116 02:35:59.765438  994955 machine.go:88] provisioning docker machine ...
	I0116 02:35:59.765459  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .DriverName
	I0116 02:35:59.765706  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetMachineName
	I0116 02:35:59.765887  994955 buildroot.go:166] provisioning hostname "multinode-835787-m03"
	I0116 02:35:59.765906  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetMachineName
	I0116 02:35:59.766043  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHHostname
	I0116 02:35:59.768275  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:35:59.768740  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:5b:60", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:25:57 +0000 UTC Type:0 Mac:52:54:00:53:5b:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-835787-m03 Clientid:01:52:54:00:53:5b:60}
	I0116 02:35:59.768763  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:35:59.768924  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHPort
	I0116 02:35:59.769076  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHKeyPath
	I0116 02:35:59.769211  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHKeyPath
	I0116 02:35:59.769329  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHUsername
	I0116 02:35:59.769492  994955 main.go:141] libmachine: Using SSH client type: native
	I0116 02:35:59.769840  994955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0116 02:35:59.769855  994955 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-835787-m03 && echo "multinode-835787-m03" | sudo tee /etc/hostname
	I0116 02:35:59.912370  994955 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-835787-m03
	
	I0116 02:35:59.912411  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHHostname
	I0116 02:35:59.915858  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:35:59.916313  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:5b:60", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:25:57 +0000 UTC Type:0 Mac:52:54:00:53:5b:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-835787-m03 Clientid:01:52:54:00:53:5b:60}
	I0116 02:35:59.916345  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:35:59.916505  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHPort
	I0116 02:35:59.916723  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHKeyPath
	I0116 02:35:59.916928  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHKeyPath
	I0116 02:35:59.917089  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHUsername
	I0116 02:35:59.917292  994955 main.go:141] libmachine: Using SSH client type: native
	I0116 02:35:59.917780  994955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0116 02:35:59.917820  994955 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-835787-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-835787-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-835787-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:36:00.046910  994955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:36:00.046944  994955 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 02:36:00.046970  994955 buildroot.go:174] setting up certificates
	I0116 02:36:00.046984  994955 provision.go:83] configureAuth start
	I0116 02:36:00.046995  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetMachineName
	I0116 02:36:00.047321  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetIP
	I0116 02:36:00.050178  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:36:00.050542  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:5b:60", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:25:57 +0000 UTC Type:0 Mac:52:54:00:53:5b:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-835787-m03 Clientid:01:52:54:00:53:5b:60}
	I0116 02:36:00.050567  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:36:00.050735  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHHostname
	I0116 02:36:00.053047  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:36:00.053409  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:5b:60", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:25:57 +0000 UTC Type:0 Mac:52:54:00:53:5b:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-835787-m03 Clientid:01:52:54:00:53:5b:60}
	I0116 02:36:00.053439  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:36:00.053558  994955 provision.go:138] copyHostCerts
	I0116 02:36:00.053597  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 02:36:00.053635  994955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 02:36:00.053646  994955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 02:36:00.053727  994955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 02:36:00.053852  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 02:36:00.053877  994955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 02:36:00.053886  994955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 02:36:00.053921  994955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 02:36:00.053985  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 02:36:00.054010  994955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 02:36:00.054020  994955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 02:36:00.054050  994955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 02:36:00.054112  994955 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.multinode-835787-m03 san=[192.168.39.123 192.168.39.123 localhost 127.0.0.1 minikube multinode-835787-m03]
	I0116 02:36:00.186288  994955 provision.go:172] copyRemoteCerts
	I0116 02:36:00.186363  994955 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:36:00.186394  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHHostname
	I0116 02:36:00.189154  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:36:00.189587  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:5b:60", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:25:57 +0000 UTC Type:0 Mac:52:54:00:53:5b:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-835787-m03 Clientid:01:52:54:00:53:5b:60}
	I0116 02:36:00.189622  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:36:00.189764  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHPort
	I0116 02:36:00.189996  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHKeyPath
	I0116 02:36:00.190182  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHUsername
	I0116 02:36:00.190298  994955 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m03/id_rsa Username:docker}
	I0116 02:36:00.287393  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 02:36:00.287461  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0116 02:36:00.315050  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 02:36:00.315163  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 02:36:00.341255  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 02:36:00.341324  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 02:36:00.368218  994955 provision.go:86] duration metric: configureAuth took 321.220048ms
	I0116 02:36:00.368250  994955 buildroot.go:189] setting minikube options for container-runtime
	I0116 02:36:00.368461  994955 config.go:182] Loaded profile config "multinode-835787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:36:00.368540  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHHostname
	I0116 02:36:00.371425  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:36:00.371759  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:5b:60", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:25:57 +0000 UTC Type:0 Mac:52:54:00:53:5b:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-835787-m03 Clientid:01:52:54:00:53:5b:60}
	I0116 02:36:00.371785  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:36:00.371986  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHPort
	I0116 02:36:00.372247  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHKeyPath
	I0116 02:36:00.372523  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHKeyPath
	I0116 02:36:00.372833  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHUsername
	I0116 02:36:00.373046  994955 main.go:141] libmachine: Using SSH client type: native
	I0116 02:36:00.373412  994955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0116 02:36:00.373429  994955 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 02:37:30.942938  994955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 02:37:30.942979  994955 machine.go:91] provisioned docker machine in 1m31.177523385s
	I0116 02:37:30.943038  994955 start.go:300] post-start starting for "multinode-835787-m03" (driver="kvm2")
	I0116 02:37:30.943059  994955 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:37:30.943092  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .DriverName
	I0116 02:37:30.943589  994955 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:37:30.943627  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHHostname
	I0116 02:37:30.946587  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:37:30.947098  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:5b:60", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:25:57 +0000 UTC Type:0 Mac:52:54:00:53:5b:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-835787-m03 Clientid:01:52:54:00:53:5b:60}
	I0116 02:37:30.947139  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:37:30.947338  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHPort
	I0116 02:37:30.947587  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHKeyPath
	I0116 02:37:30.947767  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHUsername
	I0116 02:37:30.947906  994955 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m03/id_rsa Username:docker}
	I0116 02:37:31.044237  994955 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:37:31.048367  994955 command_runner.go:130] > NAME=Buildroot
	I0116 02:37:31.048393  994955 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 02:37:31.048398  994955 command_runner.go:130] > ID=buildroot
	I0116 02:37:31.048403  994955 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 02:37:31.048408  994955 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 02:37:31.048624  994955 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 02:37:31.048658  994955 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 02:37:31.048739  994955 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 02:37:31.048810  994955 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 02:37:31.048821  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> /etc/ssl/certs/9784822.pem
	I0116 02:37:31.048968  994955 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 02:37:31.057743  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 02:37:31.082592  994955 start.go:303] post-start completed in 139.529666ms
	I0116 02:37:31.082633  994955 fix.go:56] fixHost completed within 1m31.339195649s
	I0116 02:37:31.082672  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHHostname
	I0116 02:37:31.085294  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:37:31.085697  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:5b:60", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:25:57 +0000 UTC Type:0 Mac:52:54:00:53:5b:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-835787-m03 Clientid:01:52:54:00:53:5b:60}
	I0116 02:37:31.085727  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:37:31.085904  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHPort
	I0116 02:37:31.086136  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHKeyPath
	I0116 02:37:31.086325  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHKeyPath
	I0116 02:37:31.086478  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHUsername
	I0116 02:37:31.086615  994955 main.go:141] libmachine: Using SSH client type: native
	I0116 02:37:31.087009  994955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0116 02:37:31.087021  994955 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 02:37:31.219211  994955 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705372651.213969554
	
	I0116 02:37:31.219237  994955 fix.go:206] guest clock: 1705372651.213969554
	I0116 02:37:31.219246  994955 fix.go:219] Guest: 2024-01-16 02:37:31.213969554 +0000 UTC Remote: 2024-01-16 02:37:31.082639284 +0000 UTC m=+556.938985026 (delta=131.33027ms)
	I0116 02:37:31.219267  994955 fix.go:190] guest clock delta is within tolerance: 131.33027ms
	I0116 02:37:31.219276  994955 start.go:83] releasing machines lock for "multinode-835787-m03", held for 1m31.475857101s
	I0116 02:37:31.219302  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .DriverName
	I0116 02:37:31.219593  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetIP
	I0116 02:37:31.222336  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:37:31.222813  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:5b:60", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:25:57 +0000 UTC Type:0 Mac:52:54:00:53:5b:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-835787-m03 Clientid:01:52:54:00:53:5b:60}
	I0116 02:37:31.222852  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:37:31.224943  994955 out.go:177] * Found network options:
	I0116 02:37:31.226613  994955 out.go:177]   - NO_PROXY=192.168.39.50,192.168.39.15
	W0116 02:37:31.228142  994955 proxy.go:119] fail to check proxy env: Error ip not in block
	W0116 02:37:31.228167  994955 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 02:37:31.228183  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .DriverName
	I0116 02:37:31.228861  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .DriverName
	I0116 02:37:31.229065  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .DriverName
	I0116 02:37:31.229180  994955 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:37:31.229244  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHHostname
	W0116 02:37:31.229242  994955 proxy.go:119] fail to check proxy env: Error ip not in block
	W0116 02:37:31.229303  994955 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 02:37:31.229379  994955 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 02:37:31.229398  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHHostname
	I0116 02:37:31.232078  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:37:31.232167  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:37:31.232566  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:5b:60", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:25:57 +0000 UTC Type:0 Mac:52:54:00:53:5b:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-835787-m03 Clientid:01:52:54:00:53:5b:60}
	I0116 02:37:31.232602  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:37:31.232635  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:5b:60", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:25:57 +0000 UTC Type:0 Mac:52:54:00:53:5b:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-835787-m03 Clientid:01:52:54:00:53:5b:60}
	I0116 02:37:31.232661  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:37:31.232720  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHPort
	I0116 02:37:31.232912  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHPort
	I0116 02:37:31.232945  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHKeyPath
	I0116 02:37:31.233120  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHUsername
	I0116 02:37:31.233131  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHKeyPath
	I0116 02:37:31.233291  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetSSHUsername
	I0116 02:37:31.233281  994955 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m03/id_rsa Username:docker}
	I0116 02:37:31.233447  994955 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m03/id_rsa Username:docker}
	I0116 02:37:31.350674  994955 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 02:37:31.473831  994955 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 02:37:31.479857  994955 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0116 02:37:31.480006  994955 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 02:37:31.480081  994955 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:37:31.488531  994955 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0116 02:37:31.488564  994955 start.go:475] detecting cgroup driver to use...
	I0116 02:37:31.488665  994955 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:37:31.503238  994955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:37:31.516332  994955 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:37:31.516394  994955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:37:31.530221  994955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:37:31.543814  994955 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:37:31.668094  994955 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:37:31.788792  994955 docker.go:233] disabling docker service ...
	I0116 02:37:31.788867  994955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:37:31.805047  994955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:37:31.818673  994955 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:37:31.942018  994955 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:37:32.061410  994955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 02:37:32.075072  994955 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:37:32.093610  994955 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0116 02:37:32.094036  994955 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 02:37:32.094117  994955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:37:32.105038  994955 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 02:37:32.105123  994955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:37:32.116259  994955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:37:32.126983  994955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:37:32.138013  994955 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:37:32.148633  994955 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:37:32.157786  994955 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0116 02:37:32.157902  994955 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:37:32.167097  994955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:37:32.291645  994955 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 02:37:32.523100  994955 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 02:37:32.523184  994955 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 02:37:32.528931  994955 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 02:37:32.528965  994955 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 02:37:32.528975  994955 command_runner.go:130] > Device: 16h/22d	Inode: 1173        Links: 1
	I0116 02:37:32.528986  994955 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:37:32.528995  994955 command_runner.go:130] > Access: 2024-01-16 02:37:32.448292099 +0000
	I0116 02:37:32.529003  994955 command_runner.go:130] > Modify: 2024-01-16 02:37:32.448292099 +0000
	I0116 02:37:32.529013  994955 command_runner.go:130] > Change: 2024-01-16 02:37:32.448292099 +0000
	I0116 02:37:32.529019  994955 command_runner.go:130] >  Birth: -
	I0116 02:37:32.529289  994955 start.go:543] Will wait 60s for crictl version
	I0116 02:37:32.529356  994955 ssh_runner.go:195] Run: which crictl
	I0116 02:37:32.533469  994955 command_runner.go:130] > /usr/bin/crictl
	I0116 02:37:32.533535  994955 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:37:32.583210  994955 command_runner.go:130] > Version:  0.1.0
	I0116 02:37:32.583243  994955 command_runner.go:130] > RuntimeName:  cri-o
	I0116 02:37:32.583252  994955 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0116 02:37:32.583261  994955 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 02:37:32.584677  994955 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 02:37:32.584772  994955 ssh_runner.go:195] Run: crio --version
	I0116 02:37:32.651770  994955 command_runner.go:130] > crio version 1.24.1
	I0116 02:37:32.651797  994955 command_runner.go:130] > Version:          1.24.1
	I0116 02:37:32.651804  994955 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 02:37:32.651808  994955 command_runner.go:130] > GitTreeState:     dirty
	I0116 02:37:32.651814  994955 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 02:37:32.651819  994955 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 02:37:32.651823  994955 command_runner.go:130] > Compiler:         gc
	I0116 02:37:32.651828  994955 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:37:32.651834  994955 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:37:32.651841  994955 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:37:32.651854  994955 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:37:32.651858  994955 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:37:32.653355  994955 ssh_runner.go:195] Run: crio --version
	I0116 02:37:32.709419  994955 command_runner.go:130] > crio version 1.24.1
	I0116 02:37:32.709447  994955 command_runner.go:130] > Version:          1.24.1
	I0116 02:37:32.709458  994955 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 02:37:32.709463  994955 command_runner.go:130] > GitTreeState:     dirty
	I0116 02:37:32.709469  994955 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 02:37:32.709474  994955 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 02:37:32.709478  994955 command_runner.go:130] > Compiler:         gc
	I0116 02:37:32.709483  994955 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:37:32.709488  994955 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:37:32.709497  994955 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:37:32.709501  994955 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:37:32.709505  994955 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:37:32.713214  994955 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 02:37:32.714909  994955 out.go:177]   - env NO_PROXY=192.168.39.50
	I0116 02:37:32.716504  994955 out.go:177]   - env NO_PROXY=192.168.39.50,192.168.39.15
	I0116 02:37:32.717940  994955 main.go:141] libmachine: (multinode-835787-m03) Calling .GetIP
	I0116 02:37:32.720732  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:37:32.721074  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:5b:60", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:25:57 +0000 UTC Type:0 Mac:52:54:00:53:5b:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-835787-m03 Clientid:01:52:54:00:53:5b:60}
	I0116 02:37:32.721100  994955 main.go:141] libmachine: (multinode-835787-m03) DBG | domain multinode-835787-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:53:5b:60 in network mk-multinode-835787
	I0116 02:37:32.721371  994955 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 02:37:32.725837  994955 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0116 02:37:32.726192  994955 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787 for IP: 192.168.39.123
	I0116 02:37:32.726226  994955 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:37:32.726396  994955 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 02:37:32.726446  994955 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 02:37:32.726466  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 02:37:32.726488  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 02:37:32.726506  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 02:37:32.726524  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 02:37:32.726592  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 02:37:32.726637  994955 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 02:37:32.726651  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 02:37:32.726682  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 02:37:32.726717  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:37:32.726750  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 02:37:32.726808  994955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 02:37:32.726843  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> /usr/share/ca-certificates/9784822.pem
	I0116 02:37:32.726867  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:37:32.726887  994955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem -> /usr/share/ca-certificates/978482.pem
	I0116 02:37:32.727452  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:37:32.752173  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 02:37:32.774686  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:37:32.797526  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 02:37:32.819916  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 02:37:32.842788  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:37:32.865517  994955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 02:37:32.888594  994955 ssh_runner.go:195] Run: openssl version
	I0116 02:37:32.894358  994955 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 02:37:32.894576  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:37:32.904779  994955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:37:32.909343  994955 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:37:32.909564  994955 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:37:32.909630  994955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:37:32.915556  994955 command_runner.go:130] > b5213941
	I0116 02:37:32.915616  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 02:37:32.924550  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 02:37:32.934863  994955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 02:37:32.939385  994955 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 02:37:32.939592  994955 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 02:37:32.939644  994955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 02:37:32.945633  994955 command_runner.go:130] > 51391683
	I0116 02:37:32.945722  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 02:37:32.956127  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 02:37:32.967430  994955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 02:37:32.971858  994955 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 02:37:32.972214  994955 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 02:37:32.972275  994955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 02:37:32.978195  994955 command_runner.go:130] > 3ec20f2e
	I0116 02:37:32.978268  994955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
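The block above is the standard OpenSSL trust-store install: each CA certificate is placed under /usr/share/ca-certificates, its subject hash is computed with "openssl x509 -hash -noout", and a <hash>.0 symlink pointing at the certificate is created in /etc/ssl/certs so TLS clients can locate it. A minimal Go sketch of that hash-and-symlink step, assuming local execution rather than minikube's ssh_runner (the path in main is only an example):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the OpenSSL subject hash of certPath and links it into
// /etc/ssl/certs as <hash>.0, mirroring the "openssl x509 -hash" + "ln -fs"
// pair shown in the log above.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // like ln -fs: replace any stale link first
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}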
	I0116 02:37:32.987498  994955 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:37:32.991520  994955 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:37:32.991674  994955 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:37:32.991778  994955 ssh_runner.go:195] Run: crio config
	I0116 02:37:33.045162  994955 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 02:37:33.045201  994955 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 02:37:33.045212  994955 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 02:37:33.045217  994955 command_runner.go:130] > #
	I0116 02:37:33.045228  994955 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 02:37:33.045238  994955 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 02:37:33.045248  994955 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 02:37:33.045258  994955 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 02:37:33.045276  994955 command_runner.go:130] > # reload'.
	I0116 02:37:33.045300  994955 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 02:37:33.045310  994955 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 02:37:33.045320  994955 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 02:37:33.045329  994955 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 02:37:33.045335  994955 command_runner.go:130] > [crio]
	I0116 02:37:33.045345  994955 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 02:37:33.045354  994955 command_runner.go:130] > # containers images, in this directory.
	I0116 02:37:33.045362  994955 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0116 02:37:33.045376  994955 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 02:37:33.045388  994955 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0116 02:37:33.045398  994955 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 02:37:33.045412  994955 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 02:37:33.045421  994955 command_runner.go:130] > storage_driver = "overlay"
	I0116 02:37:33.045433  994955 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 02:37:33.045442  994955 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 02:37:33.045452  994955 command_runner.go:130] > storage_option = [
	I0116 02:37:33.045460  994955 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0116 02:37:33.045469  994955 command_runner.go:130] > ]
	I0116 02:37:33.045479  994955 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 02:37:33.045492  994955 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 02:37:33.045503  994955 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 02:37:33.045512  994955 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 02:37:33.045525  994955 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 02:37:33.045535  994955 command_runner.go:130] > # always happen on a node reboot
	I0116 02:37:33.045546  994955 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 02:37:33.045558  994955 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 02:37:33.045568  994955 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 02:37:33.045590  994955 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 02:37:33.045603  994955 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 02:37:33.045618  994955 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 02:37:33.045633  994955 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 02:37:33.045647  994955 command_runner.go:130] > # internal_wipe = true
	I0116 02:37:33.045659  994955 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 02:37:33.045672  994955 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 02:37:33.045685  994955 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 02:37:33.045696  994955 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 02:37:33.045720  994955 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 02:37:33.045729  994955 command_runner.go:130] > [crio.api]
	I0116 02:37:33.045738  994955 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 02:37:33.045749  994955 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 02:37:33.045760  994955 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 02:37:33.045770  994955 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 02:37:33.045782  994955 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 02:37:33.045794  994955 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 02:37:33.045813  994955 command_runner.go:130] > # stream_port = "0"
	I0116 02:37:33.045821  994955 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 02:37:33.045832  994955 command_runner.go:130] > # stream_enable_tls = false
	I0116 02:37:33.045842  994955 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 02:37:33.045852  994955 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 02:37:33.045861  994955 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 02:37:33.045874  994955 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 02:37:33.045884  994955 command_runner.go:130] > # minutes.
	I0116 02:37:33.045891  994955 command_runner.go:130] > # stream_tls_cert = ""
	I0116 02:37:33.045904  994955 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 02:37:33.045917  994955 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 02:37:33.045927  994955 command_runner.go:130] > # stream_tls_key = ""
	I0116 02:37:33.045939  994955 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 02:37:33.045952  994955 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 02:37:33.045964  994955 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 02:37:33.046006  994955 command_runner.go:130] > # stream_tls_ca = ""
	I0116 02:37:33.046032  994955 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:37:33.046039  994955 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0116 02:37:33.046051  994955 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:37:33.046062  994955 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0116 02:37:33.046090  994955 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 02:37:33.046102  994955 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 02:37:33.046112  994955 command_runner.go:130] > [crio.runtime]
	I0116 02:37:33.046125  994955 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 02:37:33.046137  994955 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 02:37:33.046148  994955 command_runner.go:130] > # "nofile=1024:2048"
	I0116 02:37:33.046160  994955 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 02:37:33.046171  994955 command_runner.go:130] > # default_ulimits = [
	I0116 02:37:33.046177  994955 command_runner.go:130] > # ]
	I0116 02:37:33.046190  994955 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 02:37:33.046200  994955 command_runner.go:130] > # no_pivot = false
	I0116 02:37:33.046209  994955 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 02:37:33.046222  994955 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 02:37:33.046232  994955 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 02:37:33.046241  994955 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 02:37:33.046252  994955 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 02:37:33.046265  994955 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:37:33.046276  994955 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0116 02:37:33.046287  994955 command_runner.go:130] > # Cgroup setting for conmon
	I0116 02:37:33.046300  994955 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 02:37:33.046311  994955 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 02:37:33.046324  994955 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 02:37:33.046335  994955 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 02:37:33.046349  994955 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:37:33.046359  994955 command_runner.go:130] > conmon_env = [
	I0116 02:37:33.046368  994955 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0116 02:37:33.046377  994955 command_runner.go:130] > ]
	I0116 02:37:33.046386  994955 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 02:37:33.046398  994955 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 02:37:33.046408  994955 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 02:37:33.046418  994955 command_runner.go:130] > # default_env = [
	I0116 02:37:33.046424  994955 command_runner.go:130] > # ]
	I0116 02:37:33.046433  994955 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 02:37:33.046443  994955 command_runner.go:130] > # selinux = false
	I0116 02:37:33.046453  994955 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 02:37:33.046467  994955 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 02:37:33.046478  994955 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 02:37:33.046488  994955 command_runner.go:130] > # seccomp_profile = ""
	I0116 02:37:33.046497  994955 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 02:37:33.046510  994955 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 02:37:33.046520  994955 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 02:37:33.046531  994955 command_runner.go:130] > # which might increase security.
	I0116 02:37:33.046539  994955 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0116 02:37:33.046554  994955 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 02:37:33.046567  994955 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 02:37:33.046580  994955 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 02:37:33.046594  994955 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 02:37:33.046605  994955 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:37:33.046616  994955 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 02:37:33.046628  994955 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 02:37:33.046639  994955 command_runner.go:130] > # the cgroup blockio controller.
	I0116 02:37:33.046646  994955 command_runner.go:130] > # blockio_config_file = ""
	I0116 02:37:33.046720  994955 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 02:37:33.046740  994955 command_runner.go:130] > # irqbalance daemon.
	I0116 02:37:33.046749  994955 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 02:37:33.046760  994955 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 02:37:33.046768  994955 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:37:33.046777  994955 command_runner.go:130] > # rdt_config_file = ""
	I0116 02:37:33.046789  994955 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 02:37:33.046797  994955 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 02:37:33.046811  994955 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 02:37:33.046819  994955 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 02:37:33.046832  994955 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 02:37:33.046843  994955 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 02:37:33.046853  994955 command_runner.go:130] > # will be added.
	I0116 02:37:33.046861  994955 command_runner.go:130] > # default_capabilities = [
	I0116 02:37:33.046870  994955 command_runner.go:130] > # 	"CHOWN",
	I0116 02:37:33.046879  994955 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 02:37:33.046889  994955 command_runner.go:130] > # 	"FSETID",
	I0116 02:37:33.046895  994955 command_runner.go:130] > # 	"FOWNER",
	I0116 02:37:33.046904  994955 command_runner.go:130] > # 	"SETGID",
	I0116 02:37:33.046911  994955 command_runner.go:130] > # 	"SETUID",
	I0116 02:37:33.046920  994955 command_runner.go:130] > # 	"SETPCAP",
	I0116 02:37:33.046929  994955 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 02:37:33.046939  994955 command_runner.go:130] > # 	"KILL",
	I0116 02:37:33.046948  994955 command_runner.go:130] > # ]
	I0116 02:37:33.046961  994955 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 02:37:33.046973  994955 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:37:33.046983  994955 command_runner.go:130] > # default_sysctls = [
	I0116 02:37:33.046995  994955 command_runner.go:130] > # ]
	I0116 02:37:33.047006  994955 command_runner.go:130] > # List of devices on the host that a
	I0116 02:37:33.047020  994955 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 02:37:33.047031  994955 command_runner.go:130] > # allowed_devices = [
	I0116 02:37:33.047037  994955 command_runner.go:130] > # 	"/dev/fuse",
	I0116 02:37:33.047047  994955 command_runner.go:130] > # ]
	I0116 02:37:33.047054  994955 command_runner.go:130] > # List of additional devices. specified as
	I0116 02:37:33.047066  994955 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 02:37:33.047076  994955 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 02:37:33.047101  994955 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:37:33.047110  994955 command_runner.go:130] > # additional_devices = [
	I0116 02:37:33.047116  994955 command_runner.go:130] > # ]
	I0116 02:37:33.047129  994955 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 02:37:33.047136  994955 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 02:37:33.047145  994955 command_runner.go:130] > # 	"/etc/cdi",
	I0116 02:37:33.047152  994955 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 02:37:33.047160  994955 command_runner.go:130] > # ]
	I0116 02:37:33.047170  994955 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 02:37:33.047183  994955 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 02:37:33.047193  994955 command_runner.go:130] > # Defaults to false.
	I0116 02:37:33.047204  994955 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 02:37:33.047218  994955 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 02:37:33.047231  994955 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 02:37:33.047238  994955 command_runner.go:130] > # hooks_dir = [
	I0116 02:37:33.047248  994955 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 02:37:33.047256  994955 command_runner.go:130] > # ]
	I0116 02:37:33.047266  994955 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 02:37:33.047280  994955 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 02:37:33.047291  994955 command_runner.go:130] > # its default mounts from the following two files:
	I0116 02:37:33.047299  994955 command_runner.go:130] > #
	I0116 02:37:33.047309  994955 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 02:37:33.047322  994955 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 02:37:33.047332  994955 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 02:37:33.047338  994955 command_runner.go:130] > #
	I0116 02:37:33.047351  994955 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 02:37:33.047364  994955 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 02:37:33.047379  994955 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 02:37:33.047390  994955 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 02:37:33.047395  994955 command_runner.go:130] > #
	I0116 02:37:33.047403  994955 command_runner.go:130] > # default_mounts_file = ""
	I0116 02:37:33.047414  994955 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 02:37:33.047425  994955 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 02:37:33.047434  994955 command_runner.go:130] > pids_limit = 1024
	I0116 02:37:33.047444  994955 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0116 02:37:33.047458  994955 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 02:37:33.047472  994955 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 02:37:33.047488  994955 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 02:37:33.047497  994955 command_runner.go:130] > # log_size_max = -1
	I0116 02:37:33.047508  994955 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0116 02:37:33.047519  994955 command_runner.go:130] > # log_to_journald = false
	I0116 02:37:33.047529  994955 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 02:37:33.047541  994955 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 02:37:33.047550  994955 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 02:37:33.047561  994955 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 02:37:33.047573  994955 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 02:37:33.047580  994955 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 02:37:33.047591  994955 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 02:37:33.047598  994955 command_runner.go:130] > # read_only = false
	I0116 02:37:33.047610  994955 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 02:37:33.047623  994955 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 02:37:33.047633  994955 command_runner.go:130] > # live configuration reload.
	I0116 02:37:33.047640  994955 command_runner.go:130] > # log_level = "info"
	I0116 02:37:33.047652  994955 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 02:37:33.047663  994955 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:37:33.047672  994955 command_runner.go:130] > # log_filter = ""
	I0116 02:37:33.047684  994955 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 02:37:33.047694  994955 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 02:37:33.047709  994955 command_runner.go:130] > # separated by comma.
	I0116 02:37:33.047718  994955 command_runner.go:130] > # uid_mappings = ""
	I0116 02:37:33.047730  994955 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 02:37:33.047743  994955 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 02:37:33.047753  994955 command_runner.go:130] > # separated by comma.
	I0116 02:37:33.047760  994955 command_runner.go:130] > # gid_mappings = ""
	I0116 02:37:33.047774  994955 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 02:37:33.047786  994955 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:37:33.047796  994955 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:37:33.047807  994955 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 02:37:33.047820  994955 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 02:37:33.047833  994955 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:37:33.047846  994955 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:37:33.047856  994955 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 02:37:33.047866  994955 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 02:37:33.047879  994955 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 02:37:33.047889  994955 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 02:37:33.047898  994955 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 02:37:33.047908  994955 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 02:37:33.047921  994955 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 02:37:33.047932  994955 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 02:37:33.047943  994955 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 02:37:33.047955  994955 command_runner.go:130] > drop_infra_ctr = false
	I0116 02:37:33.047968  994955 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 02:37:33.047980  994955 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 02:37:33.047995  994955 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 02:37:33.048005  994955 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 02:37:33.048018  994955 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 02:37:33.048029  994955 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 02:37:33.048039  994955 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 02:37:33.048053  994955 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 02:37:33.048063  994955 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0116 02:37:33.048076  994955 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 02:37:33.048089  994955 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 02:37:33.048102  994955 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 02:37:33.048112  994955 command_runner.go:130] > # default_runtime = "runc"
	I0116 02:37:33.048123  994955 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 02:37:33.048137  994955 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0116 02:37:33.048155  994955 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 02:37:33.048166  994955 command_runner.go:130] > # creation as a file is not desired either.
	I0116 02:37:33.048178  994955 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 02:37:33.048187  994955 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 02:37:33.048194  994955 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 02:37:33.048198  994955 command_runner.go:130] > # ]
	I0116 02:37:33.048204  994955 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 02:37:33.048211  994955 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 02:37:33.048221  994955 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 02:37:33.048229  994955 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 02:37:33.048236  994955 command_runner.go:130] > #
	I0116 02:37:33.048241  994955 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 02:37:33.048248  994955 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 02:37:33.048253  994955 command_runner.go:130] > #  runtime_type = "oci"
	I0116 02:37:33.048260  994955 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 02:37:33.048265  994955 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 02:37:33.048271  994955 command_runner.go:130] > #  allowed_annotations = []
	I0116 02:37:33.048275  994955 command_runner.go:130] > # Where:
	I0116 02:37:33.048281  994955 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 02:37:33.048289  994955 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 02:37:33.048295  994955 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 02:37:33.048304  994955 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 02:37:33.048308  994955 command_runner.go:130] > #   in $PATH.
	I0116 02:37:33.048314  994955 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 02:37:33.048321  994955 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 02:37:33.048328  994955 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 02:37:33.048333  994955 command_runner.go:130] > #   state.
	I0116 02:37:33.048340  994955 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 02:37:33.048347  994955 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0116 02:37:33.048354  994955 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 02:37:33.048361  994955 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 02:37:33.048367  994955 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 02:37:33.048375  994955 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 02:37:33.048380  994955 command_runner.go:130] > #   The currently recognized values are:
	I0116 02:37:33.048389  994955 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 02:37:33.048396  994955 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 02:37:33.048404  994955 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 02:37:33.048410  994955 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 02:37:33.048419  994955 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 02:37:33.048427  994955 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 02:37:33.048435  994955 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 02:37:33.048441  994955 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 02:37:33.048449  994955 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 02:37:33.048453  994955 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 02:37:33.048460  994955 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0116 02:37:33.048464  994955 command_runner.go:130] > runtime_type = "oci"
	I0116 02:37:33.048470  994955 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 02:37:33.048475  994955 command_runner.go:130] > runtime_config_path = ""
	I0116 02:37:33.048481  994955 command_runner.go:130] > monitor_path = ""
	I0116 02:37:33.048485  994955 command_runner.go:130] > monitor_cgroup = ""
	I0116 02:37:33.048492  994955 command_runner.go:130] > monitor_exec_cgroup = ""
	I0116 02:37:33.048498  994955 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 02:37:33.048504  994955 command_runner.go:130] > # running containers
	I0116 02:37:33.048509  994955 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 02:37:33.048517  994955 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 02:37:33.048544  994955 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 02:37:33.048552  994955 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 02:37:33.048557  994955 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 02:37:33.048564  994955 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 02:37:33.048569  994955 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 02:37:33.048576  994955 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 02:37:33.048581  994955 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 02:37:33.048587  994955 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0116 02:37:33.048594  994955 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 02:37:33.048601  994955 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 02:37:33.048609  994955 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 02:37:33.048619  994955 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0116 02:37:33.048629  994955 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 02:37:33.048637  994955 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 02:37:33.048646  994955 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 02:37:33.048655  994955 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 02:37:33.048663  994955 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 02:37:33.048673  994955 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 02:37:33.048679  994955 command_runner.go:130] > # Example:
	I0116 02:37:33.048684  994955 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 02:37:33.048692  994955 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 02:37:33.048698  994955 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 02:37:33.048709  994955 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 02:37:33.048714  994955 command_runner.go:130] > # cpuset = 0
	I0116 02:37:33.048718  994955 command_runner.go:130] > # cpushares = "0-1"
	I0116 02:37:33.048724  994955 command_runner.go:130] > # Where:
	I0116 02:37:33.048729  994955 command_runner.go:130] > # The workload name is workload-type.
	I0116 02:37:33.048745  994955 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 02:37:33.048752  994955 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 02:37:33.048760  994955 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 02:37:33.048770  994955 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 02:37:33.048778  994955 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0116 02:37:33.048783  994955 command_runner.go:130] > # 
	I0116 02:37:33.048790  994955 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 02:37:33.048796  994955 command_runner.go:130] > #
	I0116 02:37:33.048802  994955 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 02:37:33.048809  994955 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 02:37:33.048817  994955 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 02:37:33.048825  994955 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 02:37:33.048834  994955 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 02:37:33.048840  994955 command_runner.go:130] > [crio.image]
	I0116 02:37:33.048846  994955 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 02:37:33.048853  994955 command_runner.go:130] > # default_transport = "docker://"
	I0116 02:37:33.048859  994955 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 02:37:33.048867  994955 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:37:33.048874  994955 command_runner.go:130] > # global_auth_file = ""
	I0116 02:37:33.048879  994955 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 02:37:33.048886  994955 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:37:33.048891  994955 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 02:37:33.048899  994955 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 02:37:33.048907  994955 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:37:33.048915  994955 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:37:33.048919  994955 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 02:37:33.048929  994955 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 02:37:33.048936  994955 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0116 02:37:33.048943  994955 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0116 02:37:33.048951  994955 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 02:37:33.048956  994955 command_runner.go:130] > # pause_command = "/pause"
	I0116 02:37:33.048964  994955 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 02:37:33.048973  994955 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 02:37:33.048979  994955 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 02:37:33.048987  994955 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 02:37:33.048995  994955 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 02:37:33.049002  994955 command_runner.go:130] > # signature_policy = ""
	I0116 02:37:33.049008  994955 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 02:37:33.049016  994955 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 02:37:33.049022  994955 command_runner.go:130] > # changing them here.
	I0116 02:37:33.049027  994955 command_runner.go:130] > # insecure_registries = [
	I0116 02:37:33.049032  994955 command_runner.go:130] > # ]
	I0116 02:37:33.049041  994955 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 02:37:33.049048  994955 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0116 02:37:33.049052  994955 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 02:37:33.049059  994955 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 02:37:33.049064  994955 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 02:37:33.049072  994955 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 02:37:33.049079  994955 command_runner.go:130] > # CNI plugins.
	I0116 02:37:33.049083  994955 command_runner.go:130] > [crio.network]
	I0116 02:37:33.049091  994955 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 02:37:33.049099  994955 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0116 02:37:33.049103  994955 command_runner.go:130] > # cni_default_network = ""
	I0116 02:37:33.049111  994955 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 02:37:33.049118  994955 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 02:37:33.049126  994955 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 02:37:33.049132  994955 command_runner.go:130] > # plugin_dirs = [
	I0116 02:37:33.049136  994955 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 02:37:33.049140  994955 command_runner.go:130] > # ]
	I0116 02:37:33.049146  994955 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 02:37:33.049152  994955 command_runner.go:130] > [crio.metrics]
	I0116 02:37:33.049157  994955 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 02:37:33.049164  994955 command_runner.go:130] > enable_metrics = true
	I0116 02:37:33.049168  994955 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 02:37:33.049176  994955 command_runner.go:130] > # Per default all metrics are enabled.
	I0116 02:37:33.049183  994955 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0116 02:37:33.049192  994955 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 02:37:33.049197  994955 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 02:37:33.049204  994955 command_runner.go:130] > # metrics_collectors = [
	I0116 02:37:33.049208  994955 command_runner.go:130] > # 	"operations",
	I0116 02:37:33.049215  994955 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 02:37:33.049219  994955 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 02:37:33.049226  994955 command_runner.go:130] > # 	"operations_errors",
	I0116 02:37:33.049230  994955 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 02:37:33.049237  994955 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 02:37:33.049241  994955 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 02:37:33.049248  994955 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 02:37:33.049252  994955 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 02:37:33.049258  994955 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 02:37:33.049263  994955 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 02:37:33.049269  994955 command_runner.go:130] > # 	"containers_oom_total",
	I0116 02:37:33.049273  994955 command_runner.go:130] > # 	"containers_oom",
	I0116 02:37:33.049279  994955 command_runner.go:130] > # 	"processes_defunct",
	I0116 02:37:33.049283  994955 command_runner.go:130] > # 	"operations_total",
	I0116 02:37:33.049290  994955 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 02:37:33.049294  994955 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 02:37:33.049301  994955 command_runner.go:130] > # 	"operations_errors_total",
	I0116 02:37:33.049305  994955 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 02:37:33.049312  994955 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 02:37:33.049317  994955 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 02:37:33.049321  994955 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 02:37:33.049328  994955 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 02:37:33.049332  994955 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 02:37:33.049338  994955 command_runner.go:130] > # ]
	I0116 02:37:33.049343  994955 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 02:37:33.049349  994955 command_runner.go:130] > # metrics_port = 9090
	I0116 02:37:33.049355  994955 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 02:37:33.049361  994955 command_runner.go:130] > # metrics_socket = ""
	I0116 02:37:33.049366  994955 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 02:37:33.049375  994955 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 02:37:33.049383  994955 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 02:37:33.049391  994955 command_runner.go:130] > # certificate on any modification event.
	I0116 02:37:33.049394  994955 command_runner.go:130] > # metrics_cert = ""
	I0116 02:37:33.049402  994955 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 02:37:33.049407  994955 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 02:37:33.049412  994955 command_runner.go:130] > # metrics_key = ""
	I0116 02:37:33.049417  994955 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 02:37:33.049421  994955 command_runner.go:130] > [crio.tracing]
	I0116 02:37:33.049429  994955 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 02:37:33.049439  994955 command_runner.go:130] > # enable_tracing = false
	I0116 02:37:33.049447  994955 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0116 02:37:33.049458  994955 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 02:37:33.049467  994955 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 02:37:33.049478  994955 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 02:37:33.049488  994955 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 02:37:33.049497  994955 command_runner.go:130] > [crio.stats]
	I0116 02:37:33.049507  994955 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 02:37:33.049518  994955 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 02:37:33.049525  994955 command_runner.go:130] > # stats_collection_period = 0
	I0116 02:37:33.049788  994955 command_runner.go:130] ! time="2024-01-16 02:37:33.036296633Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0116 02:37:33.049823  994955 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
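The "crio config" output above is the effective TOML configuration minikube inspects before provisioning (root, storage_driver, cgroup_manager, pause_image, and so on). A small sketch of reading a few of those settings back from a local crio.conf, assuming github.com/BurntSushi/toml as the decoder (CRI-O itself uses its own config loader):

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// crioConf maps only the handful of keys we want from the echoed config.
type crioConf struct {
	Crio struct {
		Runtime struct {
			CgroupManager string `toml:"cgroup_manager"`
			PidsLimit     int64  `toml:"pids_limit"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	var c crioConf
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &c); err != nil {
		log.Fatal(err)
	}
	fmt.Println(c.Crio.Runtime.CgroupManager, c.Crio.Image.PauseImage, c.Crio.Runtime.PidsLimit)
}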
	I0116 02:37:33.049908  994955 cni.go:84] Creating CNI manager for ""
	I0116 02:37:33.049922  994955 cni.go:136] 3 nodes found, recommending kindnet
	I0116 02:37:33.049934  994955 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:37:33.049962  994955 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-835787 NodeName:multinode-835787-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 02:37:33.050130  994955 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-835787-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
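	(Editor's note) The block above is the kubeadm config that minikube renders for the joining worker from the options logged at 02:37:33. As a point of reference, below is a minimal sketch of rendering a comparable InitConfiguration with Go's text/template; the template text and field names are illustrative (not minikube's actual template), and only the values are taken from the log entry above.

	package main

	import (
		"os"
		"text/template"
	)

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	  taints: []
	`

	func main() {
		// Values copied from the kubeadm options logged at 02:37:33 above.
		t := template.Must(template.New("init").Parse(initCfg))
		_ = t.Execute(os.Stdout, map[string]any{
			"AdvertiseAddress": "192.168.39.123",
			"APIServerPort":    8443,
			"CRISocket":        "unix:///var/run/crio/crio.sock",
			"NodeName":         "multinode-835787-m03",
			"NodeIP":           "192.168.39.123",
		})
	}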
	I0116 02:37:33.050202  994955 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-835787-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-835787 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
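	(Editor's note) In the kubelet drop-in above, the empty "ExecStart=" line is the standard systemd idiom for clearing the ExecStart inherited from kubelet.service before setting the override. A minimal sketch of writing such a drop-in follows; the target path matches the scp destination logged just below, but the flag set is abbreviated and illustrative, and writing it requires root.

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// The empty ExecStart= clears the value from kubelet.service; the second
		// ExecStart= sets the override (standard systemd drop-in idiom).
		dropIn := "[Service]\n" +
			"ExecStart=\n" +
			"ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet " +
			"--config=/var/lib/kubelet/config.yaml " +
			"--container-runtime-endpoint=unix:///var/run/crio/crio.sock " +
			"--node-ip=192.168.39.123\n"
		err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf",
			[]byte(dropIn), 0o644)
		fmt.Println("write drop-in:", err)
	}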
	I0116 02:37:33.050257  994955 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 02:37:33.059835  994955 command_runner.go:130] > kubeadm
	I0116 02:37:33.059856  994955 command_runner.go:130] > kubectl
	I0116 02:37:33.059862  994955 command_runner.go:130] > kubelet
	I0116 02:37:33.059886  994955 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 02:37:33.059953  994955 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0116 02:37:33.069316  994955 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0116 02:37:33.085872  994955 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 02:37:33.104011  994955 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I0116 02:37:33.108350  994955 command_runner.go:130] > 192.168.39.50	control-plane.minikube.internal
	I0116 02:37:33.108429  994955 host.go:66] Checking if "multinode-835787" exists ...
	I0116 02:37:33.108742  994955 config.go:182] Loaded profile config "multinode-835787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:37:33.108872  994955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:37:33.108919  994955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:37:33.124016  994955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39669
	I0116 02:37:33.124490  994955 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:37:33.125048  994955 main.go:141] libmachine: Using API Version  1
	I0116 02:37:33.125075  994955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:37:33.125456  994955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:37:33.125668  994955 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:37:33.125878  994955 start.go:304] JoinCluster: &{Name:multinode-835787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-835787 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.123 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:37:33.126011  994955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0116 02:37:33.126030  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:37:33.128903  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:37:33.129331  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:37:33.129363  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:37:33.129521  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:37:33.129703  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:37:33.129879  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:37:33.130007  994955 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa Username:docker}
	I0116 02:37:33.343186  994955 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token n5vqby.7fxi8oic585xfvyk --discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b 
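	(Editor's note) The join command above is obtained by running "kubeadm token create --print-join-command --ttl=0" on the control plane over SSH. A sketch of capturing the same output with os/exec is shown below; it assumes kubeadm is on the local PATH and is illustrative only, not minikube's ssh_runner.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the log shows minikube running on the control-plane VM.
		out, err := exec.Command("kubeadm", "token", "create",
			"--print-join-command", "--ttl=0").CombinedOutput()
		if err != nil {
			fmt.Println("kubeadm token create failed:", err, string(out))
			return
		}
		fmt.Println("join command:", strings.TrimSpace(string(out)))
	}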
	I0116 02:37:33.345164  994955 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.123 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0116 02:37:33.345209  994955 host.go:66] Checking if "multinode-835787" exists ...
	I0116 02:37:33.345548  994955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:37:33.345599  994955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:37:33.361585  994955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34621
	I0116 02:37:33.362128  994955 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:37:33.362655  994955 main.go:141] libmachine: Using API Version  1
	I0116 02:37:33.362678  994955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:37:33.363061  994955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:37:33.363288  994955 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:37:33.363506  994955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-835787-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0116 02:37:33.363532  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:37:33.366564  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:37:33.366954  994955 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:33:24 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:37:33.366985  994955 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:37:33.367151  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:37:33.367311  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:37:33.367465  994955 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:37:33.367822  994955 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa Username:docker}
	I0116 02:37:33.526009  994955 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0116 02:37:33.582182  994955 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-hrsvh, kube-system/kube-proxy-fpdqr
	I0116 02:37:36.604354  994955 command_runner.go:130] > node/multinode-835787-m03 cordoned
	I0116 02:37:36.604393  994955 command_runner.go:130] > pod "busybox-5bc68d56bd-ccxjq" has DeletionTimestamp older than 1 seconds, skipping
	I0116 02:37:36.604403  994955 command_runner.go:130] > node/multinode-835787-m03 drained
	I0116 02:37:36.604433  994955 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-835787-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.240899432s)
	I0116 02:37:36.604455  994955 node.go:108] successfully drained node "m03"
	I0116 02:37:36.604848  994955 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:37:36.605093  994955 kapi.go:59] client config for multinode-835787: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key", CAFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:37:36.605512  994955 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0116 02:37:36.605575  994955 round_trippers.go:463] DELETE https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m03
	I0116 02:37:36.605584  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:36.605592  994955 round_trippers.go:473]     Content-Type: application/json
	I0116 02:37:36.605598  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:36.605603  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:36.621136  994955 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0116 02:37:36.621166  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:36.621178  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:36 GMT
	I0116 02:37:36.621187  994955 round_trippers.go:580]     Audit-Id: 165c1562-9afb-48eb-a8c9-f6af92506ed1
	I0116 02:37:36.621195  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:36.621203  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:36.621211  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:36.621219  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:36.621227  994955 round_trippers.go:580]     Content-Length: 171
	I0116 02:37:36.621256  994955 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-835787-m03","kind":"nodes","uid":"67df5a31-bd76-4643-b628-d7570878cf19"}}
	I0116 02:37:36.621296  994955 node.go:124] successfully deleted node "m03"
	I0116 02:37:36.621312  994955 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.123 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
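	(Editor's note) After the kubectl drain, the stale node object is removed with a DELETE against /api/v1/nodes/multinode-835787-m03, as the 200 response above shows. A sketch of the same deletion through client-go's typed clientset follows; the kubeconfig path is an assumption for illustration.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path is an assumption for illustration.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Equivalent of the DELETE /api/v1/nodes/multinode-835787-m03 logged above.
		err = cs.CoreV1().Nodes().Delete(context.TODO(),
			"multinode-835787-m03", metav1.DeleteOptions{})
		fmt.Println("delete node:", err)
	}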
	I0116 02:37:36.621342  994955 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.123 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0116 02:37:36.621370  994955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n5vqby.7fxi8oic585xfvyk --discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-835787-m03"
	I0116 02:37:36.692559  994955 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 02:37:36.888862  994955 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0116 02:37:36.888901  994955 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0116 02:37:36.950452  994955 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:37:36.950483  994955 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:37:36.950842  994955 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 02:37:37.113849  994955 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0116 02:37:37.642537  994955 command_runner.go:130] > This node has joined the cluster:
	I0116 02:37:37.642582  994955 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0116 02:37:37.642594  994955 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0116 02:37:37.642603  994955 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0116 02:37:37.644997  994955 command_runner.go:130] ! W0116 02:37:36.687451    2489 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0116 02:37:37.645031  994955 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0116 02:37:37.645044  994955 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0116 02:37:37.645060  994955 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0116 02:37:37.645086  994955 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n5vqby.7fxi8oic585xfvyk --discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-835787-m03": (1.023697195s)
	I0116 02:37:37.645117  994955 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0116 02:37:37.915653  994955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=multinode-835787 minikube.k8s.io/updated_at=2024_01_16T02_37_37_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:37:38.042977  994955 command_runner.go:130] > node/multinode-835787-m02 labeled
	I0116 02:37:38.058432  994955 command_runner.go:130] > node/multinode-835787-m03 labeled
	I0116 02:37:38.060185  994955 start.go:306] JoinCluster complete in 4.934298308s
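	(Editor's note) The kubectl label command above selects every node with "minikube.k8s.io/primary!=true", which is why both m02 and m03 report "labeled". A sketch of setting a subset of those labels on one node with a strategic merge patch via client-go follows; minikube itself shells out to kubectl for this step, and the kubeconfig path is again an assumption.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/types"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// A subset of the labels applied by the kubectl label command above.
		patch := []byte(`{"metadata":{"labels":{` +
			`"minikube.k8s.io/version":"v1.32.0",` +
			`"minikube.k8s.io/name":"multinode-835787",` +
			`"minikube.k8s.io/primary":"false"}}}`)
		_, err = cs.CoreV1().Nodes().Patch(context.TODO(), "multinode-835787-m03",
			types.StrategicMergePatchType, patch, metav1.PatchOptions{})
		fmt.Println("label node:", err)
	}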
	I0116 02:37:38.060218  994955 cni.go:84] Creating CNI manager for ""
	I0116 02:37:38.060226  994955 cni.go:136] 3 nodes found, recommending kindnet
	I0116 02:37:38.060293  994955 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 02:37:38.068061  994955 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 02:37:38.068097  994955 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 02:37:38.068112  994955 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 02:37:38.068122  994955 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:37:38.068132  994955 command_runner.go:130] > Access: 2024-01-16 02:33:25.428611593 +0000
	I0116 02:37:38.068140  994955 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 02:37:38.068150  994955 command_runner.go:130] > Change: 2024-01-16 02:33:23.419611593 +0000
	I0116 02:37:38.068157  994955 command_runner.go:130] >  Birth: -
	I0116 02:37:38.068251  994955 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 02:37:38.068274  994955 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 02:37:38.088736  994955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 02:37:38.404480  994955 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 02:37:38.408894  994955 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 02:37:38.412136  994955 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 02:37:38.424763  994955 command_runner.go:130] > daemonset.apps/kindnet configured
	I0116 02:37:38.427773  994955 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:37:38.428141  994955 kapi.go:59] client config for multinode-835787: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key", CAFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:37:38.428611  994955 round_trippers.go:463] GET https://192.168.39.50:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:37:38.428639  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:38.428650  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:38.428662  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:38.431626  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:37:38.431648  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:38.431658  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:38.431666  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:38.431674  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:38.431682  994955 round_trippers.go:580]     Content-Length: 291
	I0116 02:37:38.431690  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:38 GMT
	I0116 02:37:38.431698  994955 round_trippers.go:580]     Audit-Id: 7fc66347-0ed9-4826-8664-b6ce926b1b73
	I0116 02:37:38.431719  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:38.431768  994955 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c3d1d02d-1d3d-4837-b3ba-04423f0d8104","resourceVersion":"927","creationTimestamp":"2024-01-16T02:23:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 02:37:38.431884  994955 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-835787" context rescaled to 1 replicas
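	(Editor's note) The rescale check above goes through the autoscaling/v1 Scale subresource of the coredns deployment (the GET returns replicas: 1, so nothing changes). A sketch of reading and, if needed, updating that subresource with client-go follows; it is illustrative and assumes the same kubeconfig path as before.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Read the autoscaling/v1 Scale subresource returned by the GET above.
		scale, err := cs.AppsV1().Deployments("kube-system").
			GetScale(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if scale.Spec.Replicas != 1 {
			scale.Spec.Replicas = 1
			_, err = cs.AppsV1().Deployments("kube-system").
				UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{})
		}
		fmt.Println("coredns replicas:", scale.Spec.Replicas, "err:", err)
	}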
	I0116 02:37:38.431923  994955 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.123 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0116 02:37:38.434072  994955 out.go:177] * Verifying Kubernetes components...
	I0116 02:37:38.435381  994955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:37:38.454112  994955 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:37:38.454384  994955 kapi.go:59] client config for multinode-835787: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/profiles/multinode-835787/client.key", CAFile:"/home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:37:38.454759  994955 node_ready.go:35] waiting up to 6m0s for node "multinode-835787-m03" to be "Ready" ...
	I0116 02:37:38.454925  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m03
	I0116 02:37:38.454944  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:38.454965  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:38.454993  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:38.457924  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:37:38.457949  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:38.457959  994955 round_trippers.go:580]     Audit-Id: 389ee2bc-7895-45a3-a71d-a05af27b2e98
	I0116 02:37:38.457968  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:38.457977  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:38.457985  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:38.457992  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:38.458000  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:38 GMT
	I0116 02:37:38.458144  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m03","uid":"3270592c-0454-4fff-a541-d03efa6d9642","resourceVersion":"1259","creationTimestamp":"2024-01-16T02:37:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_37_37_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:37:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0116 02:37:38.458465  994955 node_ready.go:49] node "multinode-835787-m03" has status "Ready":"True"
	I0116 02:37:38.458484  994955 node_ready.go:38] duration metric: took 3.698006ms waiting for node "multinode-835787-m03" to be "Ready" ...
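	(Editor's note) The "Ready" wait above reduces to fetching the node object and checking its NodeReady condition, which happens to be True on the first poll here. A sketch of that check with client-go follows; the helper is illustrative, not minikube's node_ready.go code.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the NodeReady condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		n, err := cs.CoreV1().Nodes().Get(context.TODO(),
			"multinode-835787-m03", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", nodeReady(n))
	}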
	I0116 02:37:38.458493  994955 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:37:38.458561  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 02:37:38.458570  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:38.458577  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:38.458583  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:38.463112  994955 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:37:38.463138  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:38.463149  994955 round_trippers.go:580]     Audit-Id: c1e84ee3-928f-4c51-9121-a196c10a64c8
	I0116 02:37:38.463157  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:38.463165  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:38.463172  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:38.463181  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:38.463189  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:38 GMT
	I0116 02:37:38.464244  994955 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1263"},"items":[{"metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"922","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81883 chars]
	I0116 02:37:38.467126  994955 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-965sn" in "kube-system" namespace to be "Ready" ...
	I0116 02:37:38.467221  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-965sn
	I0116 02:37:38.467230  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:38.467238  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:38.467244  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:38.469956  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:37:38.469984  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:38.469995  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:38 GMT
	I0116 02:37:38.470003  994955 round_trippers.go:580]     Audit-Id: 7cf0d642-eba3-4941-b9d6-b7d71e702ea1
	I0116 02:37:38.470011  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:38.470020  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:38.470029  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:38.470038  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:38.470174  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-965sn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0898f09-1a64-4beb-bfbf-de15f2e07038","resourceVersion":"922","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a765b12d-1df2-4d20-a0f7-7471371f2fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a765b12d-1df2-4d20-a0f7-7471371f2fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0116 02:37:38.470689  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:37:38.470705  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:38.470713  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:38.470719  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:38.473036  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:37:38.473057  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:38.473065  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:38.473071  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:38 GMT
	I0116 02:37:38.473076  994955 round_trippers.go:580]     Audit-Id: 63b67add-eda6-4782-81c8-72dbecac0082
	I0116 02:37:38.473081  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:38.473086  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:38.473092  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:38.473293  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"934","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 02:37:38.473742  994955 pod_ready.go:92] pod "coredns-5dd5756b68-965sn" in "kube-system" namespace has status "Ready":"True"
	I0116 02:37:38.473771  994955 pod_ready.go:81] duration metric: took 6.612677ms waiting for pod "coredns-5dd5756b68-965sn" in "kube-system" namespace to be "Ready" ...
	I0116 02:37:38.473785  994955 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:37:38.473883  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-835787
	I0116 02:37:38.473895  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:38.473906  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:38.473915  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:38.476333  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:37:38.476356  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:38.476365  994955 round_trippers.go:580]     Audit-Id: 89ab4658-c1d8-40e4-b08a-db1af25883e8
	I0116 02:37:38.476374  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:38.476382  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:38.476390  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:38.476398  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:38.476406  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:38 GMT
	I0116 02:37:38.476873  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-835787","namespace":"kube-system","uid":"ccb51de1-d565-42b0-bd30-9b1acb1c725d","resourceVersion":"879","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.50:2379","kubernetes.io/config.hash":"108085f55363e386b9f9c083ac579444","kubernetes.io/config.mirror":"108085f55363e386b9f9c083ac579444","kubernetes.io/config.seen":"2024-01-16T02:23:33.032941198Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0116 02:37:38.477241  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:37:38.477254  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:38.477261  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:38.477267  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:38.480944  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:37:38.480973  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:38.480984  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:38.480993  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:38 GMT
	I0116 02:37:38.481001  994955 round_trippers.go:580]     Audit-Id: 355929ce-03c4-448d-8d92-75bdfc3cbfda
	I0116 02:37:38.481013  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:38.481024  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:38.481032  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:38.481267  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"934","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 02:37:38.481664  994955 pod_ready.go:92] pod "etcd-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:37:38.481685  994955 pod_ready.go:81] duration metric: took 7.888952ms waiting for pod "etcd-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:37:38.481705  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:37:38.481780  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-835787
	I0116 02:37:38.481787  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:38.481797  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:38.481821  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:38.484182  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:37:38.484199  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:38.484206  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:38.484211  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:38 GMT
	I0116 02:37:38.484216  994955 round_trippers.go:580]     Audit-Id: 51452b4c-e046-4161-bf6e-f872e69d764c
	I0116 02:37:38.484221  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:38.484226  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:38.484232  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:38.484386  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-835787","namespace":"kube-system","uid":"9c26db11-7208-4540-8a73-407a6edd3a0b","resourceVersion":"893","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.50:8443","kubernetes.io/config.hash":"b27880b6b81ca11dc023b4901941ff6f","kubernetes.io/config.mirror":"b27880b6b81ca11dc023b4901941ff6f","kubernetes.io/config.seen":"2024-01-16T02:23:33.032945135Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0116 02:37:38.484910  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:37:38.484931  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:38.484942  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:38.484951  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:38.487355  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:37:38.487371  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:38.487377  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:38.487383  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:38.487390  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:38.487399  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:38 GMT
	I0116 02:37:38.487411  994955 round_trippers.go:580]     Audit-Id: 97bec16e-10fa-4e1d-b7b5-fcc9bb407e17
	I0116 02:37:38.487422  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:38.487763  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"934","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 02:37:38.488177  994955 pod_ready.go:92] pod "kube-apiserver-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:37:38.488198  994955 pod_ready.go:81] duration metric: took 6.484514ms waiting for pod "kube-apiserver-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:37:38.488209  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:37:38.488285  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-835787
	I0116 02:37:38.488295  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:38.488306  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:38.488321  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:38.490374  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:37:38.490398  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:38.490407  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:38.490415  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:38.490422  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:38.490430  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:38.490437  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:38 GMT
	I0116 02:37:38.490448  994955 round_trippers.go:580]     Audit-Id: ddb9819e-5b9d-4548-a5ce-40e792b3ac60
	I0116 02:37:38.490767  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-835787","namespace":"kube-system","uid":"daf9e312-54ad-4a4e-b334-9b84e55f8fef","resourceVersion":"885","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6adb137abb6e7ac4dcf8e50e41a3773b","kubernetes.io/config.mirror":"6adb137abb6e7ac4dcf8e50e41a3773b","kubernetes.io/config.seen":"2024-01-16T02:23:33.032946146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0116 02:37:38.491253  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:37:38.491267  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:38.491274  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:38.491280  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:38.493360  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:37:38.493380  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:38.493389  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:38.493399  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:38.493407  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:38.493415  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:38.493422  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:38 GMT
	I0116 02:37:38.493430  994955 round_trippers.go:580]     Audit-Id: 0510102e-1ce5-46ca-83f0-aed2e2f827d5
	I0116 02:37:38.493747  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"934","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 02:37:38.494169  994955 pod_ready.go:92] pod "kube-controller-manager-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:37:38.494192  994955 pod_ready.go:81] duration metric: took 5.973533ms waiting for pod "kube-controller-manager-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:37:38.494205  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpdqr" in "kube-system" namespace to be "Ready" ...
	I0116 02:37:38.655346  994955 request.go:629] Waited for 160.964323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpdqr
	I0116 02:37:38.655433  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpdqr
	I0116 02:37:38.655441  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:38.655449  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:38.655455  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:38.658494  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:37:38.658533  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:38.658547  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:38 GMT
	I0116 02:37:38.658556  994955 round_trippers.go:580]     Audit-Id: 8e3f097a-1fe5-4165-a7d6-016de06ecdc6
	I0116 02:37:38.658564  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:38.658572  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:38.658581  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:38.658599  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:38.658764  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fpdqr","generateName":"kube-proxy-","namespace":"kube-system","uid":"42b74cbd-93d8-4ac7-9071-112d5e7c572b","resourceVersion":"1237","creationTimestamp":"2024-01-16T02:25:22Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
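	(Editor's note) The "Waited ... due to client-side throttling" lines come from client-go's request rate limiter. The rest.Config dumps above show QPS:0 and Burst:0, so the client falls back to its defaults (roughly 5 requests/s with a burst of 10), and the back-to-back polling GETs get delayed. A sketch of raising those limits on the rest.Config follows; it is illustrative only and not a change minikube makes here.

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		// With QPS/Burst left at 0 (as in the rest.Config dumps above), client-go
		// uses its built-in defaults, which is what produces the throttling waits.
		cfg.QPS = 50
		cfg.Burst = 100
		_ = kubernetes.NewForConfigOrDie(cfg)
	}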
	I0116 02:37:38.855552  994955 request.go:629] Waited for 196.238382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m03
	I0116 02:37:38.855639  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m03
	I0116 02:37:38.855644  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:38.855653  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:38.855659  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:38.859417  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:37:38.859453  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:38.859464  994955 round_trippers.go:580]     Audit-Id: c53e81f2-81cf-49f2-a620-448b868e322c
	I0116 02:37:38.859472  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:38.859480  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:38.859489  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:38.859498  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:38.859506  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:38 GMT
	I0116 02:37:38.859685  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m03","uid":"3270592c-0454-4fff-a541-d03efa6d9642","resourceVersion":"1259","creationTimestamp":"2024-01-16T02:37:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_37_37_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:37:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0116 02:37:38.860078  994955 pod_ready.go:92] pod "kube-proxy-fpdqr" in "kube-system" namespace has status "Ready":"True"
	I0116 02:37:38.860105  994955 pod_ready.go:81] duration metric: took 365.883546ms waiting for pod "kube-proxy-fpdqr" in "kube-system" namespace to be "Ready" ...
	I0116 02:37:38.860119  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gbvc2" in "kube-system" namespace to be "Ready" ...
	I0116 02:37:39.054967  994955 request.go:629] Waited for 194.697122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbvc2
	I0116 02:37:39.055047  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbvc2
	I0116 02:37:39.055053  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:39.055061  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:39.055067  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:39.057984  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:37:39.058023  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:39.058035  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:39.058045  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:39.058053  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:39.058063  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:39.058072  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:39 GMT
	I0116 02:37:39.058083  994955 round_trippers.go:580]     Audit-Id: fc47f20c-b9df-425f-81bd-bb14d54186d5
	I0116 02:37:39.058294  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gbvc2","generateName":"kube-proxy-","namespace":"kube-system","uid":"74d63696-cb46-484d-937b-8883e6f1df06","resourceVersion":"824","creationTimestamp":"2024-01-16T02:23:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0116 02:37:39.255668  994955 request.go:629] Waited for 196.792749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:37:39.255736  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:37:39.255747  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:39.255755  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:39.255761  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:39.258470  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:37:39.258499  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:39.258508  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:39.258515  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:39.258523  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:39 GMT
	I0116 02:37:39.258529  994955 round_trippers.go:580]     Audit-Id: cf3d11a3-fa2b-4086-9007-16588728be67
	I0116 02:37:39.258536  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:39.258545  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:39.258829  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"934","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 02:37:39.259222  994955 pod_ready.go:92] pod "kube-proxy-gbvc2" in "kube-system" namespace has status "Ready":"True"
	I0116 02:37:39.259243  994955 pod_ready.go:81] duration metric: took 399.112324ms waiting for pod "kube-proxy-gbvc2" in "kube-system" namespace to be "Ready" ...
	I0116 02:37:39.259256  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hxx8p" in "kube-system" namespace to be "Ready" ...
	I0116 02:37:39.455293  994955 request.go:629] Waited for 195.947265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxx8p
	I0116 02:37:39.455376  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxx8p
	I0116 02:37:39.455385  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:39.455397  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:39.455408  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:39.459225  994955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:37:39.459254  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:39.459269  994955 round_trippers.go:580]     Audit-Id: 2db11dde-8403-4991-8390-06fced4f2dd5
	I0116 02:37:39.459279  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:39.459295  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:39.459304  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:39.459316  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:39.459325  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:39 GMT
	I0116 02:37:39.459466  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hxx8p","generateName":"kube-proxy-","namespace":"kube-system","uid":"9c35aa68-14ac-41e1-81f8-8fdb0c48d9f1","resourceVersion":"1091","creationTimestamp":"2024-01-16T02:24:32Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:24:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1b2dbfd-aa15-48f4-b587-dbae275fb5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0116 02:37:39.655442  994955 request.go:629] Waited for 195.413079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:37:39.655537  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787-m02
	I0116 02:37:39.655548  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:39.655561  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:39.655570  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:39.658436  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:37:39.658461  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:39.658468  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:39 GMT
	I0116 02:37:39.658474  994955 round_trippers.go:580]     Audit-Id: aea0d09a-e50e-4eb3-95c2-ee172bbf3db7
	I0116 02:37:39.658479  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:39.658487  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:39.658499  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:39.658509  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:39.658951  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787-m02","uid":"7ba249a5-ba94-4ff4-a7a8-df4d380c08dc","resourceVersion":"1258","creationTimestamp":"2024-01-16T02:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_37_37_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:35:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0116 02:37:39.659278  994955 pod_ready.go:92] pod "kube-proxy-hxx8p" in "kube-system" namespace has status "Ready":"True"
	I0116 02:37:39.659298  994955 pod_ready.go:81] duration metric: took 400.034857ms waiting for pod "kube-proxy-hxx8p" in "kube-system" namespace to be "Ready" ...
	I0116 02:37:39.659308  994955 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:37:39.855917  994955 request.go:629] Waited for 196.502934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-835787
	I0116 02:37:39.855991  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-835787
	I0116 02:37:39.855996  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:39.856004  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:39.856011  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:39.859002  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:37:39.859023  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:39.859030  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:39 GMT
	I0116 02:37:39.859036  994955 round_trippers.go:580]     Audit-Id: 9bbe3b11-2911-47f4-9c3b-8250edd8de22
	I0116 02:37:39.859041  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:39.859047  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:39.859055  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:39.859060  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:39.859270  994955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-835787","namespace":"kube-system","uid":"7b9c28cc-6e78-413a-af72-511714d2462e","resourceVersion":"908","creationTimestamp":"2024-01-16T02:23:33Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"230f2dad53142209ac2ae48ed27aa7b4","kubernetes.io/config.mirror":"230f2dad53142209ac2ae48ed27aa7b4","kubernetes.io/config.seen":"2024-01-16T02:23:33.032947019Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0116 02:37:40.056019  994955 request.go:629] Waited for 196.353002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:37:40.056093  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-835787
	I0116 02:37:40.056098  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:40.056106  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:40.056112  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:40.059086  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:37:40.059112  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:40.059122  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:40.059131  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:40 GMT
	I0116 02:37:40.059138  994955 round_trippers.go:580]     Audit-Id: ee1c8445-7fe0-4387-bdf9-8407bc07cec4
	I0116 02:37:40.059146  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:40.059158  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:40.059167  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:40.059318  994955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"934","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:23:29Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 02:37:40.059750  994955 pod_ready.go:92] pod "kube-scheduler-multinode-835787" in "kube-system" namespace has status "Ready":"True"
	I0116 02:37:40.059772  994955 pod_ready.go:81] duration metric: took 400.451234ms waiting for pod "kube-scheduler-multinode-835787" in "kube-system" namespace to be "Ready" ...
	I0116 02:37:40.059784  994955 pod_ready.go:38] duration metric: took 1.601280901s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:37:40.059799  994955 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:37:40.059850  994955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:37:40.073580  994955 system_svc.go:56] duration metric: took 13.77137ms WaitForService to wait for kubelet.
	I0116 02:37:40.073605  994955 kubeadm.go:581] duration metric: took 1.641650972s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:37:40.073626  994955 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:37:40.255028  994955 request.go:629] Waited for 181.304416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes
	I0116 02:37:40.255094  994955 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes
	I0116 02:37:40.255099  994955 round_trippers.go:469] Request Headers:
	I0116 02:37:40.255107  994955 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:37:40.255113  994955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:37:40.258013  994955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:37:40.258046  994955 round_trippers.go:577] Response Headers:
	I0116 02:37:40.258056  994955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 460f3a3f-dc65-4953-ae15-e5f4c6509d88
	I0116 02:37:40.258064  994955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 30dde756-1ff7-43b9-ad55-23b38d28810d
	I0116 02:37:40.258071  994955 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:37:40 GMT
	I0116 02:37:40.258080  994955 round_trippers.go:580]     Audit-Id: ea8df8f8-b4a4-4bb4-911c-d0e666bbc9d6
	I0116 02:37:40.258088  994955 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:37:40.258100  994955 round_trippers.go:580]     Content-Type: application/json
	I0116 02:37:40.258364  994955 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1272"},"items":[{"metadata":{"name":"multinode-835787","uid":"7ae74749-584e-4cbc-92c4-0e5f2539761e","resourceVersion":"934","creationTimestamp":"2024-01-16T02:23:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-835787","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-835787","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_23_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16237 chars]
	I0116 02:37:40.259053  994955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:37:40.259079  994955 node_conditions.go:123] node cpu capacity is 2
	I0116 02:37:40.259090  994955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:37:40.259094  994955 node_conditions.go:123] node cpu capacity is 2
	I0116 02:37:40.259098  994955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:37:40.259103  994955 node_conditions.go:123] node cpu capacity is 2
	I0116 02:37:40.259107  994955 node_conditions.go:105] duration metric: took 185.477133ms to run NodePressure ...
	I0116 02:37:40.259118  994955 start.go:228] waiting for startup goroutines ...
	I0116 02:37:40.259144  994955 start.go:242] writing updated cluster config ...
	I0116 02:37:40.259438  994955 ssh_runner.go:195] Run: rm -f paused
	I0116 02:37:40.315412  994955 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 02:37:40.317651  994955 out.go:177] * Done! kubectl is now configured to use "multinode-835787" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 02:33:24 UTC, ends at Tue 2024-01-16 02:37:41 UTC. --
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.515382684Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705372661515369165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d7fec255-b70e-4411-97e1-453a3b05152e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.515940371Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3d9386fe-0d82-4013-93e9-14131cb83916 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.515991448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3d9386fe-0d82-4013-93e9-14131cb83916 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.516216204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47b3bf3ef29e3d0ab3da044e5a6da23737e4f976686a54de6ed936d10c4a3d92,PodSandboxId:a613e34381d40457409cc85037602d07a39d2f23ee501bf5b9cc49e5bee843e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705372469369447347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d18fde8-ca44-4257-8475-100cd8b34ef8,},Annotations:map[string]string{io.kubernetes.container.hash: f55ed3e6,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:205b1bc3281d8e24ca47c53a0a4b97bfeb95671ac9ae834aafad552e9d27b981,PodSandboxId:f9af33ea4100aaf2ab73dfe903a0ac35370329872ebcea5198005025ff5ce917,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705372454810886143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-f6p29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de7231c8-3c4b-4fe1-a720-0e2b00c3881f,},Annotations:map[string]string{io.kubernetes.container.hash: c0b7e940,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd96ff8efc62238cefe253553c413c61caae2dbc07f5e9085a950f816b10e69,PodSandboxId:e5f124f074ade7d414d7dc6473ac30d924975b7d6a112b835892068841776491,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705372453936950858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-965sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0898f09-1a64-4beb-bfbf-de15f2e07038,},Annotations:map[string]string{io.kubernetes.container.hash: 351cd70e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:076e448a280aebd37d1ff57718a65f81f2723d05bba14a4212d6392b46bce20d,PodSandboxId:d8a10daad526ea54ff01afa0a4b1ba0f5a442c1a84c0d2c37b7d7fd5f3072c7e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705372440673316940,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-755b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: ee1ea8c4-abfe-4fea-9f71-32840f6790ed,},Annotations:map[string]string{io.kubernetes.container.hash: adda7158,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8affac82ae7c9a7460e2f37b2f09f3a79c0bbe761ff5aac94ebc6e82940c9c,PodSandboxId:a613e34381d40457409cc85037602d07a39d2f23ee501bf5b9cc49e5bee843e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705372438393993622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 2d18fde8-ca44-4257-8475-100cd8b34ef8,},Annotations:map[string]string{io.kubernetes.container.hash: f55ed3e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bfd72dc7e5a32ef3710529615905589e1dd6d1f5936b97b09b15aab8cd17230,PodSandboxId:d98284d4d0e8fc9351b1059d2b1db093ce19f44310e815159c9e4ef44fb48840,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705372438081171023,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gbvc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74d63696-cb46-484d-937b-8883e6f1
df06,},Annotations:map[string]string{io.kubernetes.container.hash: f2bc5e57,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d5a74aeb46c785236bc3310f611922f8b4c7dac4110996ace3c9b8bea7c63c3,PodSandboxId:8b84d5add46d1ac0b91e4438c910e39cbfae61fde6e743116c190e67bcb64af2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705372431856118723,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108085f55363e386b9f9c083ac579444,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 156d26bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce034e2a06a9e239cf0d3150cccb16dc5e88d13a69eb29fa1b5108e9482bad3,PodSandboxId:b843856fba195f477b4eecb317a8763ce9d201be623f205620c00843f9f25e2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705372431564855118,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230f2dad53142209ac2ae48ed27aa7b4,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d280ab48f7ec84d1ace435bbef94cfafec4166b2fec47a752b7b373d7b3b43,PodSandboxId:c2a711c4665016bb5a806666c61c8acdb9ee82b40724c4b86cf3d0454cc53bd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705372431084217797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6adb137abb6e7ac4dcf8e50e41a3773b,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703a397fb99dd1a2810707b748a39682300febfa99639618ef835668dfb6f033,PodSandboxId:6adca81c2a783bc48ffd2492bc79655c7e0215f8668392424d3dff2b5cc8ff6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705372431135017013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27880b6b81ca11dc023b4901941ff6f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: dfe7758e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3d9386fe-0d82-4013-93e9-14131cb83916 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.555286795Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=69eeba26-207e-4b60-9a4f-da3344270b59 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.555349456Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=69eeba26-207e-4b60-9a4f-da3344270b59 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.556329022Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=13494948-e040-4520-a4a7-4f2e5927ea4d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.556768013Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705372661556750891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=13494948-e040-4520-a4a7-4f2e5927ea4d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.557531657Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e8a5e20a-ad3c-4d4b-96c1-8caba4f0292b name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.557577053Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e8a5e20a-ad3c-4d4b-96c1-8caba4f0292b name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.557856223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47b3bf3ef29e3d0ab3da044e5a6da23737e4f976686a54de6ed936d10c4a3d92,PodSandboxId:a613e34381d40457409cc85037602d07a39d2f23ee501bf5b9cc49e5bee843e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705372469369447347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d18fde8-ca44-4257-8475-100cd8b34ef8,},Annotations:map[string]string{io.kubernetes.container.hash: f55ed3e6,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:205b1bc3281d8e24ca47c53a0a4b97bfeb95671ac9ae834aafad552e9d27b981,PodSandboxId:f9af33ea4100aaf2ab73dfe903a0ac35370329872ebcea5198005025ff5ce917,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705372454810886143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-f6p29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de7231c8-3c4b-4fe1-a720-0e2b00c3881f,},Annotations:map[string]string{io.kubernetes.container.hash: c0b7e940,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd96ff8efc62238cefe253553c413c61caae2dbc07f5e9085a950f816b10e69,PodSandboxId:e5f124f074ade7d414d7dc6473ac30d924975b7d6a112b835892068841776491,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705372453936950858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-965sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0898f09-1a64-4beb-bfbf-de15f2e07038,},Annotations:map[string]string{io.kubernetes.container.hash: 351cd70e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:076e448a280aebd37d1ff57718a65f81f2723d05bba14a4212d6392b46bce20d,PodSandboxId:d8a10daad526ea54ff01afa0a4b1ba0f5a442c1a84c0d2c37b7d7fd5f3072c7e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705372440673316940,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-755b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: ee1ea8c4-abfe-4fea-9f71-32840f6790ed,},Annotations:map[string]string{io.kubernetes.container.hash: adda7158,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8affac82ae7c9a7460e2f37b2f09f3a79c0bbe761ff5aac94ebc6e82940c9c,PodSandboxId:a613e34381d40457409cc85037602d07a39d2f23ee501bf5b9cc49e5bee843e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705372438393993622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 2d18fde8-ca44-4257-8475-100cd8b34ef8,},Annotations:map[string]string{io.kubernetes.container.hash: f55ed3e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bfd72dc7e5a32ef3710529615905589e1dd6d1f5936b97b09b15aab8cd17230,PodSandboxId:d98284d4d0e8fc9351b1059d2b1db093ce19f44310e815159c9e4ef44fb48840,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705372438081171023,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gbvc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74d63696-cb46-484d-937b-8883e6f1
df06,},Annotations:map[string]string{io.kubernetes.container.hash: f2bc5e57,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d5a74aeb46c785236bc3310f611922f8b4c7dac4110996ace3c9b8bea7c63c3,PodSandboxId:8b84d5add46d1ac0b91e4438c910e39cbfae61fde6e743116c190e67bcb64af2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705372431856118723,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108085f55363e386b9f9c083ac579444,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 156d26bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce034e2a06a9e239cf0d3150cccb16dc5e88d13a69eb29fa1b5108e9482bad3,PodSandboxId:b843856fba195f477b4eecb317a8763ce9d201be623f205620c00843f9f25e2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705372431564855118,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230f2dad53142209ac2ae48ed27aa7b4,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d280ab48f7ec84d1ace435bbef94cfafec4166b2fec47a752b7b373d7b3b43,PodSandboxId:c2a711c4665016bb5a806666c61c8acdb9ee82b40724c4b86cf3d0454cc53bd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705372431084217797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6adb137abb6e7ac4dcf8e50e41a3773b,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703a397fb99dd1a2810707b748a39682300febfa99639618ef835668dfb6f033,PodSandboxId:6adca81c2a783bc48ffd2492bc79655c7e0215f8668392424d3dff2b5cc8ff6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705372431135017013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27880b6b81ca11dc023b4901941ff6f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: dfe7758e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e8a5e20a-ad3c-4d4b-96c1-8caba4f0292b name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.604951135Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=37735c82-8954-4ebd-9a86-98054e40b997 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.605036782Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=37735c82-8954-4ebd-9a86-98054e40b997 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.606739500Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a04f3bb4-ef61-45f2-8374-5f6559b5979a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.607308909Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705372661607289919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a04f3bb4-ef61-45f2-8374-5f6559b5979a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.608186351Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4efafa70-790a-4a60-8180-abc2137b4418 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.608279202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4efafa70-790a-4a60-8180-abc2137b4418 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.608591060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47b3bf3ef29e3d0ab3da044e5a6da23737e4f976686a54de6ed936d10c4a3d92,PodSandboxId:a613e34381d40457409cc85037602d07a39d2f23ee501bf5b9cc49e5bee843e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705372469369447347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d18fde8-ca44-4257-8475-100cd8b34ef8,},Annotations:map[string]string{io.kubernetes.container.hash: f55ed3e6,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:205b1bc3281d8e24ca47c53a0a4b97bfeb95671ac9ae834aafad552e9d27b981,PodSandboxId:f9af33ea4100aaf2ab73dfe903a0ac35370329872ebcea5198005025ff5ce917,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705372454810886143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-f6p29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de7231c8-3c4b-4fe1-a720-0e2b00c3881f,},Annotations:map[string]string{io.kubernetes.container.hash: c0b7e940,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd96ff8efc62238cefe253553c413c61caae2dbc07f5e9085a950f816b10e69,PodSandboxId:e5f124f074ade7d414d7dc6473ac30d924975b7d6a112b835892068841776491,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705372453936950858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-965sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0898f09-1a64-4beb-bfbf-de15f2e07038,},Annotations:map[string]string{io.kubernetes.container.hash: 351cd70e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:076e448a280aebd37d1ff57718a65f81f2723d05bba14a4212d6392b46bce20d,PodSandboxId:d8a10daad526ea54ff01afa0a4b1ba0f5a442c1a84c0d2c37b7d7fd5f3072c7e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705372440673316940,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-755b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: ee1ea8c4-abfe-4fea-9f71-32840f6790ed,},Annotations:map[string]string{io.kubernetes.container.hash: adda7158,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8affac82ae7c9a7460e2f37b2f09f3a79c0bbe761ff5aac94ebc6e82940c9c,PodSandboxId:a613e34381d40457409cc85037602d07a39d2f23ee501bf5b9cc49e5bee843e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705372438393993622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 2d18fde8-ca44-4257-8475-100cd8b34ef8,},Annotations:map[string]string{io.kubernetes.container.hash: f55ed3e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bfd72dc7e5a32ef3710529615905589e1dd6d1f5936b97b09b15aab8cd17230,PodSandboxId:d98284d4d0e8fc9351b1059d2b1db093ce19f44310e815159c9e4ef44fb48840,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705372438081171023,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gbvc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74d63696-cb46-484d-937b-8883e6f1
df06,},Annotations:map[string]string{io.kubernetes.container.hash: f2bc5e57,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d5a74aeb46c785236bc3310f611922f8b4c7dac4110996ace3c9b8bea7c63c3,PodSandboxId:8b84d5add46d1ac0b91e4438c910e39cbfae61fde6e743116c190e67bcb64af2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705372431856118723,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108085f55363e386b9f9c083ac579444,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 156d26bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce034e2a06a9e239cf0d3150cccb16dc5e88d13a69eb29fa1b5108e9482bad3,PodSandboxId:b843856fba195f477b4eecb317a8763ce9d201be623f205620c00843f9f25e2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705372431564855118,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230f2dad53142209ac2ae48ed27aa7b4,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d280ab48f7ec84d1ace435bbef94cfafec4166b2fec47a752b7b373d7b3b43,PodSandboxId:c2a711c4665016bb5a806666c61c8acdb9ee82b40724c4b86cf3d0454cc53bd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705372431084217797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6adb137abb6e7ac4dcf8e50e41a3773b,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703a397fb99dd1a2810707b748a39682300febfa99639618ef835668dfb6f033,PodSandboxId:6adca81c2a783bc48ffd2492bc79655c7e0215f8668392424d3dff2b5cc8ff6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705372431135017013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27880b6b81ca11dc023b4901941ff6f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: dfe7758e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4efafa70-790a-4a60-8180-abc2137b4418 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.661719746Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d50f76fc-6afa-4fba-95c3-f50150aa40d4 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.661804246Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d50f76fc-6afa-4fba-95c3-f50150aa40d4 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.664042282Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b49f6fc5-71d1-4151-b9e3-73a5d36d5fca name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.665002950Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705372661664972666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b49f6fc5-71d1-4151-b9e3-73a5d36d5fca name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.665870123Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b86e90a0-c07b-40f7-bb87-d8189f8ae23b name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.665937392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b86e90a0-c07b-40f7-bb87-d8189f8ae23b name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:37:41 multinode-835787 crio[706]: time="2024-01-16 02:37:41.666219069Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47b3bf3ef29e3d0ab3da044e5a6da23737e4f976686a54de6ed936d10c4a3d92,PodSandboxId:a613e34381d40457409cc85037602d07a39d2f23ee501bf5b9cc49e5bee843e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705372469369447347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d18fde8-ca44-4257-8475-100cd8b34ef8,},Annotations:map[string]string{io.kubernetes.container.hash: f55ed3e6,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:205b1bc3281d8e24ca47c53a0a4b97bfeb95671ac9ae834aafad552e9d27b981,PodSandboxId:f9af33ea4100aaf2ab73dfe903a0ac35370329872ebcea5198005025ff5ce917,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705372454810886143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-f6p29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de7231c8-3c4b-4fe1-a720-0e2b00c3881f,},Annotations:map[string]string{io.kubernetes.container.hash: c0b7e940,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd96ff8efc62238cefe253553c413c61caae2dbc07f5e9085a950f816b10e69,PodSandboxId:e5f124f074ade7d414d7dc6473ac30d924975b7d6a112b835892068841776491,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705372453936950858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-965sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0898f09-1a64-4beb-bfbf-de15f2e07038,},Annotations:map[string]string{io.kubernetes.container.hash: 351cd70e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:076e448a280aebd37d1ff57718a65f81f2723d05bba14a4212d6392b46bce20d,PodSandboxId:d8a10daad526ea54ff01afa0a4b1ba0f5a442c1a84c0d2c37b7d7fd5f3072c7e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705372440673316940,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-755b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: ee1ea8c4-abfe-4fea-9f71-32840f6790ed,},Annotations:map[string]string{io.kubernetes.container.hash: adda7158,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8affac82ae7c9a7460e2f37b2f09f3a79c0bbe761ff5aac94ebc6e82940c9c,PodSandboxId:a613e34381d40457409cc85037602d07a39d2f23ee501bf5b9cc49e5bee843e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705372438393993622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 2d18fde8-ca44-4257-8475-100cd8b34ef8,},Annotations:map[string]string{io.kubernetes.container.hash: f55ed3e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bfd72dc7e5a32ef3710529615905589e1dd6d1f5936b97b09b15aab8cd17230,PodSandboxId:d98284d4d0e8fc9351b1059d2b1db093ce19f44310e815159c9e4ef44fb48840,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705372438081171023,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gbvc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74d63696-cb46-484d-937b-8883e6f1
df06,},Annotations:map[string]string{io.kubernetes.container.hash: f2bc5e57,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d5a74aeb46c785236bc3310f611922f8b4c7dac4110996ace3c9b8bea7c63c3,PodSandboxId:8b84d5add46d1ac0b91e4438c910e39cbfae61fde6e743116c190e67bcb64af2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705372431856118723,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 108085f55363e386b9f9c083ac579444,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 156d26bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce034e2a06a9e239cf0d3150cccb16dc5e88d13a69eb29fa1b5108e9482bad3,PodSandboxId:b843856fba195f477b4eecb317a8763ce9d201be623f205620c00843f9f25e2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705372431564855118,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230f2dad53142209ac2ae48ed27aa7b4,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d280ab48f7ec84d1ace435bbef94cfafec4166b2fec47a752b7b373d7b3b43,PodSandboxId:c2a711c4665016bb5a806666c61c8acdb9ee82b40724c4b86cf3d0454cc53bd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705372431084217797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6adb137abb6e7ac4dcf8e50e41a3773b,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703a397fb99dd1a2810707b748a39682300febfa99639618ef835668dfb6f033,PodSandboxId:6adca81c2a783bc48ffd2492bc79655c7e0215f8668392424d3dff2b5cc8ff6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705372431135017013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-835787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27880b6b81ca11dc023b4901941ff6f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: dfe7758e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b86e90a0-c07b-40f7-bb87-d8189f8ae23b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	47b3bf3ef29e3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   a613e34381d40       storage-provisioner
	205b1bc3281d8       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   f9af33ea4100a       busybox-5bc68d56bd-f6p29
	abd96ff8efc62       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   e5f124f074ade       coredns-5dd5756b68-965sn
	076e448a280ae       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   d8a10daad526e       kindnet-755b9
	ac8affac82ae7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   a613e34381d40       storage-provisioner
	8bfd72dc7e5a3       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   d98284d4d0e8f       kube-proxy-gbvc2
	6d5a74aeb46c7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   8b84d5add46d1       etcd-multinode-835787
	cce034e2a06a9       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   b843856fba195       kube-scheduler-multinode-835787
	703a397fb99dd       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   6adca81c2a783       kube-apiserver-multinode-835787
	53d280ab48f7e       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   c2a711c466501       kube-controller-manager-multinode-835787
	
	
	==> coredns [abd96ff8efc62238cefe253553c413c61caae2dbc07f5e9085a950f816b10e69] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:32969 - 40147 "HINFO IN 7231510986367936837.5657203991061440042. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011611116s
	
	
	==> describe nodes <==
	Name:               multinode-835787
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-835787
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=multinode-835787
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T02_23_34_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:23:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-835787
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:37:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:34:27 +0000   Tue, 16 Jan 2024 02:23:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:34:27 +0000   Tue, 16 Jan 2024 02:23:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:34:27 +0000   Tue, 16 Jan 2024 02:23:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:34:27 +0000   Tue, 16 Jan 2024 02:34:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.50
	  Hostname:    multinode-835787
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 721446812514433291cd434ad703da0e
	  System UUID:                72144681-2514-4332-91cd-434ad703da0e
	  Boot ID:                    a05082a0-103c-4b3b-a471-9e0ed5f7f7dc
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-f6p29                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-965sn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-835787                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-755b9                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-835787             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-835787    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-gbvc2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-835787             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m43s                  kube-proxy       
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node multinode-835787 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node multinode-835787 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node multinode-835787 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-835787 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-835787 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-835787 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-835787 event: Registered Node multinode-835787 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-835787 status is now: NodeReady
	  Normal  Starting                 3m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s (x8 over 3m51s)  kubelet          Node multinode-835787 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x8 over 3m51s)  kubelet          Node multinode-835787 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x7 over 3m51s)  kubelet          Node multinode-835787 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m32s                  node-controller  Node multinode-835787 event: Registered Node multinode-835787 in Controller
	
	
	Name:               multinode-835787-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-835787-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=multinode-835787
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_16T02_37_37_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:35:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-835787-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:37:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:35:56 +0000   Tue, 16 Jan 2024 02:35:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:35:56 +0000   Tue, 16 Jan 2024 02:35:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:35:56 +0000   Tue, 16 Jan 2024 02:35:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:35:56 +0000   Tue, 16 Jan 2024 02:35:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    multinode-835787-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d83e5dbc8204ad7954aeb6f0ba554db
	  System UUID:                8d83e5db-c820-4ad7-954a-eb6f0ba554db
	  Boot ID:                    3a522fc8-6938-4f5d-a9fd-3c0700e94c86
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-2kwlb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-nllfm               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-hxx8p            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 103s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-835787-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-835787-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-835787-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet     Node multinode-835787-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m48s                  kubelet     Node multinode-835787-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m11s (x2 over 3m11s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       107s                   kubelet     Node multinode-835787-m02 status is now: NodeNotSchedulable
	  Normal   Starting                 105s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  105s (x2 over 105s)    kubelet     Node multinode-835787-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    105s (x2 over 105s)    kubelet     Node multinode-835787-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     105s (x2 over 105s)    kubelet     Node multinode-835787-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  105s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                105s                   kubelet     Node multinode-835787-m02 status is now: NodeReady
	
	
	Name:               multinode-835787-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-835787-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=multinode-835787
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_16T02_37_37_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:37:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-835787-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:37:37 +0000   Tue, 16 Jan 2024 02:37:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:37:37 +0000   Tue, 16 Jan 2024 02:37:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:37:37 +0000   Tue, 16 Jan 2024 02:37:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:37:37 +0000   Tue, 16 Jan 2024 02:37:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    multinode-835787-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 4da2bf37f0654f38977052be64bfc746
	  System UUID:                4da2bf37-f065-4f38-9770-52be64bfc746
	  Boot ID:                    b716aede-8aa1-4747-aac6-e17226fcce65
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-ccxjq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kindnet-hrsvh               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-fpdqr            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 5s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-835787-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-835787-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-835787-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-835787-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-835787-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-835787-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-835787-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                kubelet     Node multinode-835787-m03 status is now: NodeReady
	  Normal   NodeNotReady             73s                kubelet     Node multinode-835787-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        39s (x2 over 99s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  5s                 kubelet     Node multinode-835787-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s                 kubelet     Node multinode-835787-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s                 kubelet     Node multinode-835787-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                 kubelet     Node multinode-835787-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Jan16 02:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069214] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.419032] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.389639] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153806] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000028] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.450278] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.652465] systemd-fstab-generator[632]: Ignoring "noauto" for root device
	[  +0.125037] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.147451] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.122842] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.229879] systemd-fstab-generator[691]: Ignoring "noauto" for root device
	[ +17.129831] systemd-fstab-generator[905]: Ignoring "noauto" for root device
	[Jan16 02:34] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [6d5a74aeb46c785236bc3310f611922f8b4c7dac4110996ace3c9b8bea7c63c3] <==
	{"level":"info","ts":"2024-01-16T02:33:53.565123Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-16T02:33:53.565135Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-16T02:33:53.568242Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-16T02:33:53.568438Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"eb1de673f525aa4c","initial-advertise-peer-urls":["https://192.168.39.50:2380"],"listen-peer-urls":["https://192.168.39.50:2380"],"advertise-client-urls":["https://192.168.39.50:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.50:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-16T02:33:53.56849Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-16T02:33:53.568759Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c switched to configuration voters=(16941950758946187852)"}
	{"level":"info","ts":"2024-01-16T02:33:53.568852Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.50:2380"}
	{"level":"info","ts":"2024-01-16T02:33:53.568881Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.50:2380"}
	{"level":"info","ts":"2024-01-16T02:33:53.568857Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c4909210040256fc","local-member-id":"eb1de673f525aa4c","added-peer-id":"eb1de673f525aa4c","added-peer-peer-urls":["https://192.168.39.50:2380"]}
	{"level":"info","ts":"2024-01-16T02:33:53.569044Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c4909210040256fc","local-member-id":"eb1de673f525aa4c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:33:53.56909Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:33:55.043306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-16T02:33:55.043415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-16T02:33:55.043452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c received MsgPreVoteResp from eb1de673f525aa4c at term 2"}
	{"level":"info","ts":"2024-01-16T02:33:55.043482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c became candidate at term 3"}
	{"level":"info","ts":"2024-01-16T02:33:55.043506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c received MsgVoteResp from eb1de673f525aa4c at term 3"}
	{"level":"info","ts":"2024-01-16T02:33:55.043533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c became leader at term 3"}
	{"level":"info","ts":"2024-01-16T02:33:55.043562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: eb1de673f525aa4c elected leader eb1de673f525aa4c at term 3"}
	{"level":"info","ts":"2024-01-16T02:33:55.046808Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T02:33:55.046757Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"eb1de673f525aa4c","local-member-attributes":"{Name:multinode-835787 ClientURLs:[https://192.168.39.50:2379]}","request-path":"/0/members/eb1de673f525aa4c/attributes","cluster-id":"c4909210040256fc","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T02:33:55.047937Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T02:33:55.048816Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T02:33:55.049081Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T02:33:55.049122Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-16T02:33:55.04987Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.50:2379"}
	
	
	==> kernel <==
	 02:37:42 up 4 min,  0 users,  load average: 0.16, 0.21, 0.10
	Linux multinode-835787 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [076e448a280aebd37d1ff57718a65f81f2723d05bba14a4212d6392b46bce20d] <==
	I0116 02:36:52.521316       1 main.go:250] Node multinode-835787-m03 has CIDR [10.244.3.0/24] 
	I0116 02:37:02.536386       1 main.go:223] Handling node with IPs: map[192.168.39.50:{}]
	I0116 02:37:02.536540       1 main.go:227] handling current node
	I0116 02:37:02.536575       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I0116 02:37:02.536602       1 main.go:250] Node multinode-835787-m02 has CIDR [10.244.1.0/24] 
	I0116 02:37:02.537003       1 main.go:223] Handling node with IPs: map[192.168.39.123:{}]
	I0116 02:37:02.537045       1 main.go:250] Node multinode-835787-m03 has CIDR [10.244.3.0/24] 
	I0116 02:37:12.551312       1 main.go:223] Handling node with IPs: map[192.168.39.50:{}]
	I0116 02:37:12.551373       1 main.go:227] handling current node
	I0116 02:37:12.551391       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I0116 02:37:12.551397       1 main.go:250] Node multinode-835787-m02 has CIDR [10.244.1.0/24] 
	I0116 02:37:12.551555       1 main.go:223] Handling node with IPs: map[192.168.39.123:{}]
	I0116 02:37:12.551587       1 main.go:250] Node multinode-835787-m03 has CIDR [10.244.3.0/24] 
	I0116 02:37:22.566066       1 main.go:223] Handling node with IPs: map[192.168.39.50:{}]
	I0116 02:37:22.566118       1 main.go:227] handling current node
	I0116 02:37:22.566141       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I0116 02:37:22.566147       1 main.go:250] Node multinode-835787-m02 has CIDR [10.244.1.0/24] 
	I0116 02:37:22.566254       1 main.go:223] Handling node with IPs: map[192.168.39.123:{}]
	I0116 02:37:22.566259       1 main.go:250] Node multinode-835787-m03 has CIDR [10.244.3.0/24] 
	I0116 02:37:32.573312       1 main.go:223] Handling node with IPs: map[192.168.39.50:{}]
	I0116 02:37:32.573399       1 main.go:227] handling current node
	I0116 02:37:32.573420       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I0116 02:37:32.573426       1 main.go:250] Node multinode-835787-m02 has CIDR [10.244.1.0/24] 
	I0116 02:37:32.573554       1 main.go:223] Handling node with IPs: map[192.168.39.123:{}]
	I0116 02:37:32.573591       1 main.go:250] Node multinode-835787-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [703a397fb99dd1a2810707b748a39682300febfa99639618ef835668dfb6f033] <==
	I0116 02:33:56.564493       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0116 02:33:56.564514       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0116 02:33:56.565095       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0116 02:33:56.632561       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0116 02:33:56.634780       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0116 02:33:56.670279       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0116 02:33:56.676837       1 aggregator.go:166] initial CRD sync complete...
	I0116 02:33:56.676922       1 autoregister_controller.go:141] Starting autoregister controller
	I0116 02:33:56.676948       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0116 02:33:56.676974       1 cache.go:39] Caches are synced for autoregister controller
	I0116 02:33:56.681325       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0116 02:33:56.683802       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0116 02:33:56.689418       1 shared_informer.go:318] Caches are synced for configmaps
	I0116 02:33:56.689493       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0116 02:33:56.689604       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0116 02:33:56.689700       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E0116 02:33:56.699897       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0116 02:33:57.494060       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0116 02:33:59.350196       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0116 02:33:59.524262       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0116 02:33:59.538302       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0116 02:33:59.623396       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 02:33:59.637565       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0116 02:34:09.489127       1 controller.go:624] quota admission added evaluator for: endpoints
	I0116 02:34:09.739112       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [53d280ab48f7ec84d1ace435bbef94cfafec4166b2fec47a752b7b373d7b3b43] <==
	I0116 02:35:56.731485       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-hzzdv" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-hzzdv"
	I0116 02:35:56.731584       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-835787-m02\" does not exist"
	I0116 02:35:56.758490       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-835787-m02" podCIDRs=["10.244.1.0/24"]
	I0116 02:35:56.778596       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-835787-m03"
	I0116 02:35:57.637528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="117.689µs"
	I0116 02:36:10.894886       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="144.617µs"
	I0116 02:36:11.487810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="95.711µs"
	I0116 02:36:11.493862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="57.73µs"
	I0116 02:36:29.270690       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-835787-m02"
	I0116 02:37:33.605611       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-2kwlb"
	I0116 02:37:33.618788       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="29.664339ms"
	I0116 02:37:33.635254       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.294626ms"
	I0116 02:37:33.635706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="146.804µs"
	I0116 02:37:33.635970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="118.609µs"
	I0116 02:37:33.652803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="114.617µs"
	I0116 02:37:34.750084       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.922398ms"
	I0116 02:37:34.750301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="83.392µs"
	I0116 02:37:35.698800       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="67.411µs"
	I0116 02:37:36.614099       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-835787-m02"
	I0116 02:37:37.338732       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-835787-m02"
	I0116 02:37:37.338806       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-835787-m03\" does not exist"
	I0116 02:37:37.339184       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-ccxjq" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-ccxjq"
	I0116 02:37:37.368555       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-835787-m03" podCIDRs=["10.244.2.0/24"]
	I0116 02:37:37.486478       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-835787-m02"
	I0116 02:37:38.245776       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="82.974µs"
	
	
	==> kube-proxy [8bfd72dc7e5a32ef3710529615905589e1dd6d1f5936b97b09b15aab8cd17230] <==
	I0116 02:33:58.485103       1 server_others.go:69] "Using iptables proxy"
	I0116 02:33:58.528206       1 node.go:141] Successfully retrieved node IP: 192.168.39.50
	I0116 02:33:58.729833       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 02:33:58.729905       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 02:33:58.748738       1 server_others.go:152] "Using iptables Proxier"
	I0116 02:33:58.748813       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 02:33:58.749112       1 server.go:846] "Version info" version="v1.28.4"
	I0116 02:33:58.749137       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 02:33:58.755310       1 config.go:188] "Starting service config controller"
	I0116 02:33:58.755332       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 02:33:58.755347       1 config.go:97] "Starting endpoint slice config controller"
	I0116 02:33:58.755351       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 02:33:58.755869       1 config.go:315] "Starting node config controller"
	I0116 02:33:58.755875       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 02:33:58.855914       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 02:33:58.856044       1 shared_informer.go:318] Caches are synced for node config
	I0116 02:33:58.856070       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [cce034e2a06a9e239cf0d3150cccb16dc5e88d13a69eb29fa1b5108e9482bad3] <==
	I0116 02:33:53.915705       1 serving.go:348] Generated self-signed cert in-memory
	W0116 02:33:56.615088       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0116 02:33:56.615136       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 02:33:56.615154       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 02:33:56.615160       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 02:33:56.644962       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0116 02:33:56.645022       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 02:33:56.647262       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0116 02:33:56.647394       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0116 02:33:56.647410       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 02:33:56.647424       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0116 02:33:56.748846       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 02:33:24 UTC, ends at Tue 2024-01-16 02:37:42 UTC. --
	Jan 16 02:34:00 multinode-835787 kubelet[911]: E0116 02:34:00.844454     911 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/de7231c8-3c4b-4fe1-a720-0e2b00c3881f-kube-api-access-cdpff podName:de7231c8-3c4b-4fe1-a720-0e2b00c3881f nodeName:}" failed. No retries permitted until 2024-01-16 02:34:04.844440005 +0000 UTC m=+14.996628953 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cdpff" (UniqueName: "kubernetes.io/projected/de7231c8-3c4b-4fe1-a720-0e2b00c3881f-kube-api-access-cdpff") pod "busybox-5bc68d56bd-f6p29" (UID: "de7231c8-3c4b-4fe1-a720-0e2b00c3881f") : object "default"/"kube-root-ca.crt" not registered
	Jan 16 02:34:01 multinode-835787 kubelet[911]: E0116 02:34:01.133181     911 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-f6p29" podUID="de7231c8-3c4b-4fe1-a720-0e2b00c3881f"
	Jan 16 02:34:01 multinode-835787 kubelet[911]: E0116 02:34:01.133827     911 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-965sn" podUID="a0898f09-1a64-4beb-bfbf-de15f2e07038"
	Jan 16 02:34:03 multinode-835787 kubelet[911]: E0116 02:34:03.133287     911 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-f6p29" podUID="de7231c8-3c4b-4fe1-a720-0e2b00c3881f"
	Jan 16 02:34:03 multinode-835787 kubelet[911]: E0116 02:34:03.133438     911 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-965sn" podUID="a0898f09-1a64-4beb-bfbf-de15f2e07038"
	Jan 16 02:34:04 multinode-835787 kubelet[911]: E0116 02:34:04.776708     911 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 16 02:34:04 multinode-835787 kubelet[911]: E0116 02:34:04.776830     911 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a0898f09-1a64-4beb-bfbf-de15f2e07038-config-volume podName:a0898f09-1a64-4beb-bfbf-de15f2e07038 nodeName:}" failed. No retries permitted until 2024-01-16 02:34:12.77681541 +0000 UTC m=+22.929004345 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a0898f09-1a64-4beb-bfbf-de15f2e07038-config-volume") pod "coredns-5dd5756b68-965sn" (UID: "a0898f09-1a64-4beb-bfbf-de15f2e07038") : object "kube-system"/"coredns" not registered
	Jan 16 02:34:04 multinode-835787 kubelet[911]: E0116 02:34:04.877459     911 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jan 16 02:34:04 multinode-835787 kubelet[911]: E0116 02:34:04.877523     911 projected.go:198] Error preparing data for projected volume kube-api-access-cdpff for pod default/busybox-5bc68d56bd-f6p29: object "default"/"kube-root-ca.crt" not registered
	Jan 16 02:34:04 multinode-835787 kubelet[911]: E0116 02:34:04.877577     911 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/de7231c8-3c4b-4fe1-a720-0e2b00c3881f-kube-api-access-cdpff podName:de7231c8-3c4b-4fe1-a720-0e2b00c3881f nodeName:}" failed. No retries permitted until 2024-01-16 02:34:12.877563193 +0000 UTC m=+23.029752141 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cdpff" (UniqueName: "kubernetes.io/projected/de7231c8-3c4b-4fe1-a720-0e2b00c3881f-kube-api-access-cdpff") pod "busybox-5bc68d56bd-f6p29" (UID: "de7231c8-3c4b-4fe1-a720-0e2b00c3881f") : object "default"/"kube-root-ca.crt" not registered
	Jan 16 02:34:05 multinode-835787 kubelet[911]: E0116 02:34:05.133412     911 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-965sn" podUID="a0898f09-1a64-4beb-bfbf-de15f2e07038"
	Jan 16 02:34:05 multinode-835787 kubelet[911]: E0116 02:34:05.133807     911 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-f6p29" podUID="de7231c8-3c4b-4fe1-a720-0e2b00c3881f"
	Jan 16 02:34:29 multinode-835787 kubelet[911]: I0116 02:34:29.341995     911 scope.go:117] "RemoveContainer" containerID="ac8affac82ae7c9a7460e2f37b2f09f3a79c0bbe761ff5aac94ebc6e82940c9c"
	Jan 16 02:34:50 multinode-835787 kubelet[911]: E0116 02:34:50.152965     911 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 02:34:50 multinode-835787 kubelet[911]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 02:34:50 multinode-835787 kubelet[911]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 02:34:50 multinode-835787 kubelet[911]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 02:35:50 multinode-835787 kubelet[911]: E0116 02:35:50.159394     911 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 02:35:50 multinode-835787 kubelet[911]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 02:35:50 multinode-835787 kubelet[911]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 02:35:50 multinode-835787 kubelet[911]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 02:36:50 multinode-835787 kubelet[911]: E0116 02:36:50.152170     911 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 02:36:50 multinode-835787 kubelet[911]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 02:36:50 multinode-835787 kubelet[911]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 02:36:50 multinode-835787 kubelet[911]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-835787 -n multinode-835787
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-835787 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (689.69s)
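Note: the kubelet journal above shows MountVolume.SetUp failures being retried with a growing durationBeforeRetry (4s, then 8s) while the CNI plugin is not yet ready. The following is a minimal, illustrative Go sketch of that kind of doubling backoff; it is not the kubelet's nestedpendingoperations code, and the function names, durations, and cap below are assumptions made for the sketch.

// Illustrative only: exponential backoff in the spirit of the kubelet log's
// "No retries permitted until ... (durationBeforeRetry 4s)" followed by 8s.
// NOT the kubelet implementation; mountWithBackoff and the cap are made up.
package main

import (
	"errors"
	"fmt"
	"time"
)

func mountWithBackoff(mount func() error, initial, max time.Duration) error {
	delay := initial
	for attempt := 1; ; attempt++ {
		err := mount()
		if err == nil {
			return nil
		}
		if delay > max {
			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
		}
		fmt.Printf("attempt %d failed (%v); no retries permitted for %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // doubles each round, as 4s -> 8s in the log above
	}
}

func main() {
	// The kubelet log used 4s doubling to 8s; milliseconds keep this demo fast.
	calls := 0
	err := mountWithBackoff(func() error {
		calls++
		if calls < 3 {
			return errors.New(`object "default"/"kube-root-ca.crt" not registered`)
		}
		return nil
	}, 4*time.Millisecond, 100*time.Millisecond)
	fmt.Println("result:", err)
}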

                                                
                                    
TestMultiNode/serial/StopMultiNode (142.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 stop
E0116 02:38:12.495443  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-835787 stop: exit status 82 (2m0.288854521s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-835787"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-835787 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 status
E0116 02:39:50.170025  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-835787 status: exit status 3 (18.862568639s)

                                                
                                                
-- stdout --
	multinode-835787
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-835787-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 02:40:04.262238  997295 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.50:22: connect: no route to host
	E0116 02:40:04.262283  997295 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.50:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-835787 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-835787 -n multinode-835787
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-835787 -n multinode-835787: exit status 3 (3.211358559s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 02:40:07.654404  997401 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.50:22: connect: no route to host
	E0116 02:40:07.654425  997401 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.50:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-835787" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (142.36s)
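Note: the status errors above ("dial tcp 192.168.39.50:22: connect: no route to host") show the node being probed over its SSH endpoint and found unreachable after the failed stop. Below is a minimal, illustrative Go sketch of such a TCP reachability probe; it is not minikube's status implementation, and the address and timeout are assumptions for the sketch.

// Illustrative only: a TCP probe of the node's SSH port, mirroring the
// "connect: no route to host" status errors above. NOT minikube's code.
package main

import (
	"fmt"
	"net"
	"time"
)

func sshReachable(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		// e.g. "dial tcp 192.168.39.50:22: connect: no route to host"
		return fmt.Errorf("status error: %w", err)
	}
	defer conn.Close()
	return nil
}

func main() {
	if err := sshReachable("192.168.39.50:22", 5*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("host: Running")
}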

                                                
                                    
TestPreload (283.33s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-542891 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0116 02:49:50.170422  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:50:30.558982  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-542891 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m22.440653006s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-542891 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-542891 image pull gcr.io/k8s-minikube/busybox: (1.184599399s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-542891
E0116 02:52:27.513095  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-542891: exit status 82 (2m0.295349201s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-542891"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-542891 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2024-01-16 02:52:50.950623348 +0000 UTC m=+3148.834449933
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-542891 -n test-preload-542891
E0116 02:52:53.217207  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-542891 -n test-preload-542891: exit status 3 (18.469152957s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 02:53:09.414212 1000386 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	E0116 02:53:09.414242 1000386 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-542891" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-542891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-542891
--- FAIL: TestPreload (283.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (138.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-934668 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-934668 --alsologtostderr -v=3: exit status 82 (2m0.313966855s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-934668"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 03:05:20.926067 1010492 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:05:20.926235 1010492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:05:20.926246 1010492 out.go:309] Setting ErrFile to fd 2...
	I0116 03:05:20.926253 1010492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:05:20.926472 1010492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 03:05:20.926760 1010492 out.go:303] Setting JSON to false
	I0116 03:05:20.926868 1010492 mustload.go:65] Loading cluster: no-preload-934668
	I0116 03:05:20.927275 1010492 config.go:182] Loaded profile config "no-preload-934668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:05:20.927372 1010492 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/config.json ...
	I0116 03:05:20.927543 1010492 mustload.go:65] Loading cluster: no-preload-934668
	I0116 03:05:20.927727 1010492 config.go:182] Loaded profile config "no-preload-934668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:05:20.927769 1010492 stop.go:39] StopHost: no-preload-934668
	I0116 03:05:20.928176 1010492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:05:20.928243 1010492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:05:20.944073 1010492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I0116 03:05:20.944556 1010492 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:05:20.945184 1010492 main.go:141] libmachine: Using API Version  1
	I0116 03:05:20.945212 1010492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:05:20.945585 1010492 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:05:20.949024 1010492 out.go:177] * Stopping node "no-preload-934668"  ...
	I0116 03:05:20.950530 1010492 main.go:141] libmachine: Stopping "no-preload-934668"...
	I0116 03:05:20.950567 1010492 main.go:141] libmachine: (no-preload-934668) Calling .GetState
	I0116 03:05:20.952337 1010492 main.go:141] libmachine: (no-preload-934668) Calling .Stop
	I0116 03:05:20.956187 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 0/120
	I0116 03:05:21.958343 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 1/120
	I0116 03:05:22.961011 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 2/120
	I0116 03:05:23.963220 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 3/120
	I0116 03:05:24.964822 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 4/120
	I0116 03:05:25.967130 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 5/120
	I0116 03:05:26.968836 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 6/120
	I0116 03:05:27.970195 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 7/120
	I0116 03:05:28.972226 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 8/120
	I0116 03:05:29.974174 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 9/120
	I0116 03:05:30.975682 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 10/120
	I0116 03:05:31.977372 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 11/120
	I0116 03:05:32.978811 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 12/120
	I0116 03:05:33.980871 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 13/120
	I0116 03:05:34.982685 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 14/120
	I0116 03:05:35.984449 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 15/120
	I0116 03:05:36.985919 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 16/120
	I0116 03:05:37.987723 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 17/120
	I0116 03:05:38.989131 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 18/120
	I0116 03:05:39.990833 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 19/120
	I0116 03:05:40.993092 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 20/120
	I0116 03:05:41.994868 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 21/120
	I0116 03:05:42.997135 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 22/120
	I0116 03:05:43.998576 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 23/120
	I0116 03:05:45.000003 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 24/120
	I0116 03:05:46.002225 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 25/120
	I0116 03:05:47.003849 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 26/120
	I0116 03:05:48.005238 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 27/120
	I0116 03:05:49.007135 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 28/120
	I0116 03:05:50.008700 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 29/120
	I0116 03:05:51.011348 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 30/120
	I0116 03:05:52.013466 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 31/120
	I0116 03:05:53.014955 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 32/120
	I0116 03:05:54.016969 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 33/120
	I0116 03:05:55.018627 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 34/120
	I0116 03:05:56.020840 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 35/120
	I0116 03:05:57.022258 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 36/120
	I0116 03:05:58.023749 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 37/120
	I0116 03:05:59.025270 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 38/120
	I0116 03:06:00.026796 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 39/120
	I0116 03:06:01.029183 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 40/120
	I0116 03:06:02.030790 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 41/120
	I0116 03:06:03.032482 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 42/120
	I0116 03:06:04.034279 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 43/120
	I0116 03:06:05.036283 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 44/120
	I0116 03:06:06.038541 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 45/120
	I0116 03:06:07.040326 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 46/120
	I0116 03:06:08.041722 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 47/120
	I0116 03:06:09.043396 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 48/120
	I0116 03:06:10.044967 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 49/120
	I0116 03:06:11.046376 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 50/120
	I0116 03:06:12.047758 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 51/120
	I0116 03:06:13.049183 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 52/120
	I0116 03:06:14.050681 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 53/120
	I0116 03:06:15.052369 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 54/120
	I0116 03:06:16.054644 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 55/120
	I0116 03:06:17.056041 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 56/120
	I0116 03:06:18.057364 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 57/120
	I0116 03:06:19.058948 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 58/120
	I0116 03:06:20.060375 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 59/120
	I0116 03:06:21.062739 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 60/120
	I0116 03:06:22.064212 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 61/120
	I0116 03:06:23.065615 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 62/120
	I0116 03:06:24.067330 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 63/120
	I0116 03:06:25.068801 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 64/120
	I0116 03:06:26.070704 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 65/120
	I0116 03:06:27.072131 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 66/120
	I0116 03:06:28.073523 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 67/120
	I0116 03:06:29.074874 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 68/120
	I0116 03:06:30.076358 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 69/120
	I0116 03:06:31.078653 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 70/120
	I0116 03:06:32.079962 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 71/120
	I0116 03:06:33.081418 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 72/120
	I0116 03:06:34.082855 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 73/120
	I0116 03:06:35.084310 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 74/120
	I0116 03:06:36.086683 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 75/120
	I0116 03:06:37.088216 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 76/120
	I0116 03:06:38.089603 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 77/120
	I0116 03:06:39.091205 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 78/120
	I0116 03:06:40.092546 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 79/120
	I0116 03:06:41.094058 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 80/120
	I0116 03:06:42.095862 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 81/120
	I0116 03:06:43.097257 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 82/120
	I0116 03:06:44.098884 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 83/120
	I0116 03:06:45.100195 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 84/120
	I0116 03:06:46.102434 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 85/120
	I0116 03:06:47.103979 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 86/120
	I0116 03:06:48.105426 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 87/120
	I0116 03:06:49.106996 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 88/120
	I0116 03:06:50.108467 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 89/120
	I0116 03:06:51.109903 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 90/120
	I0116 03:06:52.111358 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 91/120
	I0116 03:06:53.112893 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 92/120
	I0116 03:06:54.114496 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 93/120
	I0116 03:06:55.116022 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 94/120
	I0116 03:06:56.118120 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 95/120
	I0116 03:06:57.119574 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 96/120
	I0116 03:06:58.121002 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 97/120
	I0116 03:06:59.122553 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 98/120
	I0116 03:07:00.124334 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 99/120
	I0116 03:07:01.126530 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 100/120
	I0116 03:07:02.128135 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 101/120
	I0116 03:07:03.129558 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 102/120
	I0116 03:07:04.131193 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 103/120
	I0116 03:07:05.132528 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 104/120
	I0116 03:07:06.134572 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 105/120
	I0116 03:07:07.136196 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 106/120
	I0116 03:07:08.137586 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 107/120
	I0116 03:07:09.139276 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 108/120
	I0116 03:07:10.140710 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 109/120
	I0116 03:07:11.143250 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 110/120
	I0116 03:07:12.144484 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 111/120
	I0116 03:07:13.146010 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 112/120
	I0116 03:07:14.147519 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 113/120
	I0116 03:07:15.149291 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 114/120
	I0116 03:07:16.151557 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 115/120
	I0116 03:07:17.152975 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 116/120
	I0116 03:07:18.154688 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 117/120
	I0116 03:07:19.156362 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 118/120
	I0116 03:07:20.157783 1010492 main.go:141] libmachine: (no-preload-934668) Waiting for machine to stop 119/120
	I0116 03:07:21.159161 1010492 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 03:07:21.159245 1010492 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 03:07:21.161688 1010492 out.go:177] 
	W0116 03:07:21.163465 1010492 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0116 03:07:21.163490 1010492 out.go:239] * 
	* 
	W0116 03:07:21.167033 1010492 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 03:07:21.169349 1010492 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-934668 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-934668 -n no-preload-934668
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-934668 -n no-preload-934668: exit status 3 (18.643738352s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:07:39.814247 1011116 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.29:22: connect: no route to host
	E0116 03:07:39.814274 1011116 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.29:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-934668" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.96s)
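Note: the stderr above shows the driver polling "Waiting for machine to stop 0/120" through "119/120" once per second and then giving up with GUEST_STOP_TIMEOUT because the VM never leaves the "Running" state. The following is a minimal, illustrative Go sketch of that polling pattern; it is not the libmachine/kvm2 driver code, and getState and the interval used in main are hypothetical stand-ins.

// Illustrative only: the stop-polling loop visible in the stderr above,
// ending in `stop err: unable to stop vm, current state "Running"`.
// NOT the libmachine/kvm2 driver implementation.
package main

import (
	"fmt"
	"time"
)

func waitForStop(getState func() string, attempts int, interval time.Duration) error {
	for i := 0; i < attempts; i++ {
		if getState() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(interval)
	}
	return fmt.Errorf("unable to stop vm, current state %q", getState())
}

func main() {
	// Simulate a VM that never leaves "Running", as in the failed stops above;
	// the real loop waits one second per attempt (120 attempts = 2 minutes).
	getState := func() string { return "Running" }
	if err := waitForStop(getState, 120, time.Millisecond); err != nil {
		fmt.Println("stop err:", err)
	}
}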

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (138.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-480663 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-480663 --alsologtostderr -v=3: exit status 82 (2m0.312838102s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-480663"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 03:05:22.598054 1010557 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:05:22.598380 1010557 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:05:22.598392 1010557 out.go:309] Setting ErrFile to fd 2...
	I0116 03:05:22.598397 1010557 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:05:22.598580 1010557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 03:05:22.598822 1010557 out.go:303] Setting JSON to false
	I0116 03:05:22.598952 1010557 mustload.go:65] Loading cluster: embed-certs-480663
	I0116 03:05:22.599278 1010557 config.go:182] Loaded profile config "embed-certs-480663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:05:22.599350 1010557 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/config.json ...
	I0116 03:05:22.599521 1010557 mustload.go:65] Loading cluster: embed-certs-480663
	I0116 03:05:22.599639 1010557 config.go:182] Loaded profile config "embed-certs-480663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:05:22.599667 1010557 stop.go:39] StopHost: embed-certs-480663
	I0116 03:05:22.600131 1010557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:05:22.600184 1010557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:05:22.616284 1010557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36553
	I0116 03:05:22.616748 1010557 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:05:22.617411 1010557 main.go:141] libmachine: Using API Version  1
	I0116 03:05:22.617441 1010557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:05:22.617845 1010557 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:05:22.620498 1010557 out.go:177] * Stopping node "embed-certs-480663"  ...
	I0116 03:05:22.621948 1010557 main.go:141] libmachine: Stopping "embed-certs-480663"...
	I0116 03:05:22.621977 1010557 main.go:141] libmachine: (embed-certs-480663) Calling .GetState
	I0116 03:05:22.624078 1010557 main.go:141] libmachine: (embed-certs-480663) Calling .Stop
	I0116 03:05:22.627985 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 0/120
	I0116 03:05:23.629345 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 1/120
	I0116 03:05:24.631035 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 2/120
	I0116 03:05:25.633188 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 3/120
	I0116 03:05:26.635006 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 4/120
	I0116 03:05:27.637309 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 5/120
	I0116 03:05:28.638905 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 6/120
	I0116 03:05:29.640240 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 7/120
	I0116 03:05:30.641696 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 8/120
	I0116 03:05:31.643282 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 9/120
	I0116 03:05:32.645202 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 10/120
	I0116 03:05:33.646720 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 11/120
	I0116 03:05:34.648340 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 12/120
	I0116 03:05:35.649932 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 13/120
	I0116 03:05:36.651330 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 14/120
	I0116 03:05:37.653340 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 15/120
	I0116 03:05:38.655179 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 16/120
	I0116 03:05:39.657296 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 17/120
	I0116 03:05:40.658992 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 18/120
	I0116 03:05:41.660858 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 19/120
	I0116 03:05:42.662575 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 20/120
	I0116 03:05:43.664169 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 21/120
	I0116 03:05:44.665708 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 22/120
	I0116 03:05:45.667385 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 23/120
	I0116 03:05:46.668919 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 24/120
	I0116 03:05:47.671169 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 25/120
	I0116 03:05:48.672686 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 26/120
	I0116 03:05:49.675015 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 27/120
	I0116 03:05:50.676556 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 28/120
	I0116 03:05:51.678225 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 29/120
	I0116 03:05:52.680594 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 30/120
	I0116 03:05:53.682098 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 31/120
	I0116 03:05:54.683521 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 32/120
	I0116 03:05:55.685126 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 33/120
	I0116 03:05:56.687520 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 34/120
	I0116 03:05:57.689840 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 35/120
	I0116 03:05:58.691556 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 36/120
	I0116 03:05:59.693001 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 37/120
	I0116 03:06:00.694680 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 38/120
	I0116 03:06:01.696353 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 39/120
	I0116 03:06:02.698683 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 40/120
	I0116 03:06:03.700013 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 41/120
	I0116 03:06:04.702002 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 42/120
	I0116 03:06:05.704401 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 43/120
	I0116 03:06:06.705891 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 44/120
	I0116 03:06:07.708097 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 45/120
	I0116 03:06:08.709591 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 46/120
	I0116 03:06:09.711517 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 47/120
	I0116 03:06:10.713168 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 48/120
	I0116 03:06:11.714840 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 49/120
	I0116 03:06:12.717262 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 50/120
	I0116 03:06:13.718849 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 51/120
	I0116 03:06:14.720491 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 52/120
	I0116 03:06:15.722358 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 53/120
	I0116 03:06:16.724460 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 54/120
	I0116 03:06:17.726349 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 55/120
	I0116 03:06:18.727979 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 56/120
	I0116 03:06:19.729449 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 57/120
	I0116 03:06:20.731025 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 58/120
	I0116 03:06:21.733614 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 59/120
	I0116 03:06:22.735931 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 60/120
	I0116 03:06:23.737885 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 61/120
	I0116 03:06:24.739508 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 62/120
	I0116 03:06:25.741325 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 63/120
	I0116 03:06:26.742752 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 64/120
	I0116 03:06:27.744984 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 65/120
	I0116 03:06:28.746613 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 66/120
	I0116 03:06:29.748015 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 67/120
	I0116 03:06:30.749574 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 68/120
	I0116 03:06:31.751039 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 69/120
	I0116 03:06:32.752407 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 70/120
	I0116 03:06:33.754036 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 71/120
	I0116 03:06:34.755539 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 72/120
	I0116 03:06:35.757081 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 73/120
	I0116 03:06:36.758634 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 74/120
	I0116 03:06:37.761034 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 75/120
	I0116 03:06:38.762883 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 76/120
	I0116 03:06:39.764596 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 77/120
	I0116 03:06:40.766257 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 78/120
	I0116 03:06:41.767563 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 79/120
	I0116 03:06:42.769249 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 80/120
	I0116 03:06:43.770854 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 81/120
	I0116 03:06:44.772766 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 82/120
	I0116 03:06:45.774184 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 83/120
	I0116 03:06:46.775701 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 84/120
	I0116 03:06:47.777939 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 85/120
	I0116 03:06:48.779604 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 86/120
	I0116 03:06:49.781423 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 87/120
	I0116 03:06:50.783035 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 88/120
	I0116 03:06:51.784700 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 89/120
	I0116 03:06:52.787025 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 90/120
	I0116 03:06:53.788575 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 91/120
	I0116 03:06:54.790252 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 92/120
	I0116 03:06:55.791846 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 93/120
	I0116 03:06:56.793493 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 94/120
	I0116 03:06:57.795615 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 95/120
	I0116 03:06:58.796948 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 96/120
	I0116 03:06:59.798405 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 97/120
	I0116 03:07:00.800203 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 98/120
	I0116 03:07:01.801996 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 99/120
	I0116 03:07:02.804472 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 100/120
	I0116 03:07:03.806545 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 101/120
	I0116 03:07:04.808152 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 102/120
	I0116 03:07:05.809890 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 103/120
	I0116 03:07:06.811268 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 104/120
	I0116 03:07:07.813419 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 105/120
	I0116 03:07:08.815014 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 106/120
	I0116 03:07:09.816562 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 107/120
	I0116 03:07:10.818139 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 108/120
	I0116 03:07:11.819813 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 109/120
	I0116 03:07:12.821785 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 110/120
	I0116 03:07:13.823205 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 111/120
	I0116 03:07:14.824768 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 112/120
	I0116 03:07:15.826374 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 113/120
	I0116 03:07:16.827983 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 114/120
	I0116 03:07:17.830493 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 115/120
	I0116 03:07:18.832484 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 116/120
	I0116 03:07:19.834222 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 117/120
	I0116 03:07:20.835604 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 118/120
	I0116 03:07:21.837098 1010557 main.go:141] libmachine: (embed-certs-480663) Waiting for machine to stop 119/120
	I0116 03:07:22.837704 1010557 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 03:07:22.837764 1010557 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 03:07:22.840050 1010557 out.go:177] 
	W0116 03:07:22.841540 1010557 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0116 03:07:22.841551 1010557 out.go:239] * 
	* 
	W0116 03:07:22.844949 1010557 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 03:07:22.846347 1010557 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-480663 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-480663 -n embed-certs-480663
E0116 03:07:27.513346  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-480663 -n embed-certs-480663: exit status 3 (18.502263188s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:07:41.350186 1011146 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.150:22: connect: no route to host
	E0116 03:07:41.350219 1011146 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.150:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-480663" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.82s)
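
The repeated "Waiting for machine to stop N/120" lines above reflect a fixed polling budget: the stop request is issued once, the machine state is then checked roughly once per second for up to 120 attempts, and if the VM still reports "Running" after the last attempt the command gives up with GUEST_STOP_TIMEOUT (exit status 82). The sketch below is an illustration of that pattern only, not minikube's implementation; requestStop and machineState are hypothetical stand-ins for the libmachine driver calls.

	package main

	import (
		"fmt"
		"time"
	)

	// Hypothetical stand-ins for the driver calls made through libmachine.
	func requestStop() error   { return nil }        // ask the driver to stop the VM
	func machineState() string { return "Running" }  // query the driver for the current state

	func stopWithTimeout() error {
		if err := requestStop(); err != nil {
			return err
		}
		for i := 0; i < 120; i++ {
			fmt.Printf("Waiting for machine to stop %d/120\n", i)
			if machineState() != "Running" {
				return nil // the VM shut down within the budget
			}
			time.Sleep(time.Second)
		}
		// After 120 attempts the command aborts, which the log reports as GUEST_STOP_TIMEOUT.
		return fmt.Errorf("unable to stop vm, current state %q", machineState())
	}

	func main() {
		if err := stopWithTimeout(); err != nil {
			fmt.Println("stop err:", err)
		}
	}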

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (139.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-788237 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-788237 --alsologtostderr -v=3: exit status 82 (2m0.318977801s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-788237"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 03:05:48.782321 1010757 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:05:48.782599 1010757 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:05:48.782609 1010757 out.go:309] Setting ErrFile to fd 2...
	I0116 03:05:48.782614 1010757 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:05:48.782825 1010757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 03:05:48.783090 1010757 out.go:303] Setting JSON to false
	I0116 03:05:48.783193 1010757 mustload.go:65] Loading cluster: old-k8s-version-788237
	I0116 03:05:48.783551 1010757 config.go:182] Loaded profile config "old-k8s-version-788237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:05:48.783622 1010757 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/config.json ...
	I0116 03:05:48.783801 1010757 mustload.go:65] Loading cluster: old-k8s-version-788237
	I0116 03:05:48.783912 1010757 config.go:182] Loaded profile config "old-k8s-version-788237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:05:48.783946 1010757 stop.go:39] StopHost: old-k8s-version-788237
	I0116 03:05:48.784360 1010757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:05:48.784432 1010757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:05:48.799585 1010757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I0116 03:05:48.800147 1010757 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:05:48.800864 1010757 main.go:141] libmachine: Using API Version  1
	I0116 03:05:48.800892 1010757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:05:48.801282 1010757 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:05:48.804023 1010757 out.go:177] * Stopping node "old-k8s-version-788237"  ...
	I0116 03:05:48.805952 1010757 main.go:141] libmachine: Stopping "old-k8s-version-788237"...
	I0116 03:05:48.805968 1010757 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetState
	I0116 03:05:48.808150 1010757 main.go:141] libmachine: (old-k8s-version-788237) Calling .Stop
	I0116 03:05:48.811739 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 0/120
	I0116 03:05:49.813393 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 1/120
	I0116 03:05:50.814935 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 2/120
	I0116 03:05:51.816341 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 3/120
	I0116 03:05:52.818246 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 4/120
	I0116 03:05:53.819933 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 5/120
	I0116 03:05:54.821410 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 6/120
	I0116 03:05:55.822971 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 7/120
	I0116 03:05:56.824490 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 8/120
	I0116 03:05:57.826053 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 9/120
	I0116 03:05:58.827571 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 10/120
	I0116 03:05:59.829994 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 11/120
	I0116 03:06:00.831335 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 12/120
	I0116 03:06:01.832834 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 13/120
	I0116 03:06:02.834414 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 14/120
	I0116 03:06:03.836283 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 15/120
	I0116 03:06:04.837842 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 16/120
	I0116 03:06:05.839830 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 17/120
	I0116 03:06:06.841282 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 18/120
	I0116 03:06:07.842852 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 19/120
	I0116 03:06:08.845265 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 20/120
	I0116 03:06:09.846918 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 21/120
	I0116 03:06:10.848502 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 22/120
	I0116 03:06:11.851065 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 23/120
	I0116 03:06:12.852763 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 24/120
	I0116 03:06:13.855309 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 25/120
	I0116 03:06:14.856892 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 26/120
	I0116 03:06:15.858426 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 27/120
	I0116 03:06:16.859702 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 28/120
	I0116 03:06:17.861641 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 29/120
	I0116 03:06:18.863107 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 30/120
	I0116 03:06:19.864584 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 31/120
	I0116 03:06:20.866200 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 32/120
	I0116 03:06:21.867743 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 33/120
	I0116 03:06:22.869138 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 34/120
	I0116 03:06:23.871362 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 35/120
	I0116 03:06:24.872916 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 36/120
	I0116 03:06:25.874555 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 37/120
	I0116 03:06:26.876460 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 38/120
	I0116 03:06:27.878044 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 39/120
	I0116 03:06:28.880216 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 40/120
	I0116 03:06:29.881682 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 41/120
	I0116 03:06:30.883024 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 42/120
	I0116 03:06:31.884584 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 43/120
	I0116 03:06:32.886040 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 44/120
	I0116 03:06:33.887997 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 45/120
	I0116 03:06:34.889572 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 46/120
	I0116 03:06:35.891106 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 47/120
	I0116 03:06:36.892764 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 48/120
	I0116 03:06:37.894386 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 49/120
	I0116 03:06:38.896966 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 50/120
	I0116 03:06:39.898592 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 51/120
	I0116 03:06:40.900021 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 52/120
	I0116 03:06:41.901991 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 53/120
	I0116 03:06:42.903397 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 54/120
	I0116 03:06:43.905896 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 55/120
	I0116 03:06:44.907321 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 56/120
	I0116 03:06:45.908942 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 57/120
	I0116 03:06:46.910376 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 58/120
	I0116 03:06:47.911990 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 59/120
	I0116 03:06:48.913397 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 60/120
	I0116 03:06:49.914892 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 61/120
	I0116 03:06:50.916355 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 62/120
	I0116 03:06:51.917920 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 63/120
	I0116 03:06:52.919521 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 64/120
	I0116 03:06:53.921597 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 65/120
	I0116 03:06:54.922877 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 66/120
	I0116 03:06:55.924460 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 67/120
	I0116 03:06:56.926096 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 68/120
	I0116 03:06:57.928517 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 69/120
	I0116 03:06:58.930790 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 70/120
	I0116 03:06:59.932413 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 71/120
	I0116 03:07:00.933752 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 72/120
	I0116 03:07:01.935290 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 73/120
	I0116 03:07:02.936814 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 74/120
	I0116 03:07:03.938812 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 75/120
	I0116 03:07:04.940342 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 76/120
	I0116 03:07:05.941692 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 77/120
	I0116 03:07:06.943371 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 78/120
	I0116 03:07:07.944685 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 79/120
	I0116 03:07:08.947118 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 80/120
	I0116 03:07:09.948572 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 81/120
	I0116 03:07:10.950044 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 82/120
	I0116 03:07:11.951659 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 83/120
	I0116 03:07:12.953126 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 84/120
	I0116 03:07:13.955389 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 85/120
	I0116 03:07:14.956822 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 86/120
	I0116 03:07:15.958376 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 87/120
	I0116 03:07:16.960034 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 88/120
	I0116 03:07:17.961589 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 89/120
	I0116 03:07:18.963110 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 90/120
	I0116 03:07:19.964733 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 91/120
	I0116 03:07:20.966082 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 92/120
	I0116 03:07:21.967414 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 93/120
	I0116 03:07:22.969236 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 94/120
	I0116 03:07:23.971317 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 95/120
	I0116 03:07:24.972914 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 96/120
	I0116 03:07:25.974354 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 97/120
	I0116 03:07:26.975895 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 98/120
	I0116 03:07:27.977154 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 99/120
	I0116 03:07:28.979595 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 100/120
	I0116 03:07:29.981114 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 101/120
	I0116 03:07:30.982570 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 102/120
	I0116 03:07:31.984264 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 103/120
	I0116 03:07:32.985612 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 104/120
	I0116 03:07:33.987800 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 105/120
	I0116 03:07:34.989319 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 106/120
	I0116 03:07:35.990959 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 107/120
	I0116 03:07:36.992717 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 108/120
	I0116 03:07:37.994153 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 109/120
	I0116 03:07:38.996658 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 110/120
	I0116 03:07:39.998050 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 111/120
	I0116 03:07:41.000514 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 112/120
	I0116 03:07:42.002157 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 113/120
	I0116 03:07:43.003614 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 114/120
	I0116 03:07:44.005945 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 115/120
	I0116 03:07:45.007750 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 116/120
	I0116 03:07:46.009516 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 117/120
	I0116 03:07:47.010930 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 118/120
	I0116 03:07:48.012503 1010757 main.go:141] libmachine: (old-k8s-version-788237) Waiting for machine to stop 119/120
	I0116 03:07:49.013699 1010757 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 03:07:49.013765 1010757 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 03:07:49.015811 1010757 out.go:177] 
	W0116 03:07:49.017470 1010757 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0116 03:07:49.017486 1010757 out.go:239] * 
	* 
	W0116 03:07:49.020974 1010757 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 03:07:49.022721 1010757 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-788237 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-788237 -n old-k8s-version-788237
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-788237 -n old-k8s-version-788237: exit status 3 (18.693684252s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:08:07.718195 1011369 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.91:22: connect: no route to host
	E0116 03:08:07.718218 1011369 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.91:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-788237" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-775571 --alsologtostderr -v=3
E0116 03:07:10.559671  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-775571 --alsologtostderr -v=3: exit status 82 (2m0.296803055s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-775571"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 03:06:24.253665 1010950 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:06:24.253891 1010950 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:06:24.253903 1010950 out.go:309] Setting ErrFile to fd 2...
	I0116 03:06:24.253908 1010950 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:06:24.254116 1010950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 03:06:24.254385 1010950 out.go:303] Setting JSON to false
	I0116 03:06:24.254490 1010950 mustload.go:65] Loading cluster: default-k8s-diff-port-775571
	I0116 03:06:24.254839 1010950 config.go:182] Loaded profile config "default-k8s-diff-port-775571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:06:24.254912 1010950 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/config.json ...
	I0116 03:06:24.255077 1010950 mustload.go:65] Loading cluster: default-k8s-diff-port-775571
	I0116 03:06:24.255187 1010950 config.go:182] Loaded profile config "default-k8s-diff-port-775571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:06:24.255215 1010950 stop.go:39] StopHost: default-k8s-diff-port-775571
	I0116 03:06:24.255685 1010950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:06:24.255784 1010950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:06:24.270880 1010950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36783
	I0116 03:06:24.271435 1010950 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:06:24.272079 1010950 main.go:141] libmachine: Using API Version  1
	I0116 03:06:24.272107 1010950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:06:24.272505 1010950 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:06:24.275292 1010950 out.go:177] * Stopping node "default-k8s-diff-port-775571"  ...
	I0116 03:06:24.276672 1010950 main.go:141] libmachine: Stopping "default-k8s-diff-port-775571"...
	I0116 03:06:24.276700 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetState
	I0116 03:06:24.278572 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Stop
	I0116 03:06:24.282290 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 0/120
	I0116 03:06:25.283944 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 1/120
	I0116 03:06:26.285380 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 2/120
	I0116 03:06:27.287060 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 3/120
	I0116 03:06:28.288602 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 4/120
	I0116 03:06:29.291047 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 5/120
	I0116 03:06:30.292555 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 6/120
	I0116 03:06:31.294258 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 7/120
	I0116 03:06:32.295600 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 8/120
	I0116 03:06:33.297121 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 9/120
	I0116 03:06:34.299256 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 10/120
	I0116 03:06:35.300495 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 11/120
	I0116 03:06:36.301941 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 12/120
	I0116 03:06:37.303322 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 13/120
	I0116 03:06:38.304825 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 14/120
	I0116 03:06:39.307345 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 15/120
	I0116 03:06:40.308798 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 16/120
	I0116 03:06:41.310523 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 17/120
	I0116 03:06:42.311882 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 18/120
	I0116 03:06:43.313263 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 19/120
	I0116 03:06:44.314544 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 20/120
	I0116 03:06:45.316252 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 21/120
	I0116 03:06:46.317894 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 22/120
	I0116 03:06:47.319572 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 23/120
	I0116 03:06:48.320916 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 24/120
	I0116 03:06:49.323204 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 25/120
	I0116 03:06:50.324983 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 26/120
	I0116 03:06:51.326658 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 27/120
	I0116 03:06:52.329459 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 28/120
	I0116 03:06:53.330746 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 29/120
	I0116 03:06:54.333258 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 30/120
	I0116 03:06:55.334791 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 31/120
	I0116 03:06:56.336205 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 32/120
	I0116 03:06:57.337737 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 33/120
	I0116 03:06:58.339393 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 34/120
	I0116 03:06:59.341376 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 35/120
	I0116 03:07:00.342816 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 36/120
	I0116 03:07:01.344367 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 37/120
	I0116 03:07:02.345921 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 38/120
	I0116 03:07:03.347413 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 39/120
	I0116 03:07:04.348946 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 40/120
	I0116 03:07:05.350307 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 41/120
	I0116 03:07:06.352352 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 42/120
	I0116 03:07:07.353981 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 43/120
	I0116 03:07:08.355500 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 44/120
	I0116 03:07:09.357817 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 45/120
	I0116 03:07:10.359125 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 46/120
	I0116 03:07:11.360573 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 47/120
	I0116 03:07:12.361912 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 48/120
	I0116 03:07:13.363511 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 49/120
	I0116 03:07:14.365303 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 50/120
	I0116 03:07:15.366918 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 51/120
	I0116 03:07:16.368504 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 52/120
	I0116 03:07:17.370163 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 53/120
	I0116 03:07:18.371787 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 54/120
	I0116 03:07:19.373902 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 55/120
	I0116 03:07:20.375209 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 56/120
	I0116 03:07:21.376600 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 57/120
	I0116 03:07:22.378386 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 58/120
	I0116 03:07:23.379967 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 59/120
	I0116 03:07:24.382327 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 60/120
	I0116 03:07:25.383959 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 61/120
	I0116 03:07:26.385378 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 62/120
	I0116 03:07:27.386819 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 63/120
	I0116 03:07:28.388308 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 64/120
	I0116 03:07:29.390181 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 65/120
	I0116 03:07:30.391724 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 66/120
	I0116 03:07:31.393199 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 67/120
	I0116 03:07:32.394850 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 68/120
	I0116 03:07:33.396396 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 69/120
	I0116 03:07:34.398008 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 70/120
	I0116 03:07:35.399762 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 71/120
	I0116 03:07:36.401201 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 72/120
	I0116 03:07:37.402967 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 73/120
	I0116 03:07:38.404365 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 74/120
	I0116 03:07:39.406684 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 75/120
	I0116 03:07:40.408355 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 76/120
	I0116 03:07:41.409748 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 77/120
	I0116 03:07:42.411164 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 78/120
	I0116 03:07:43.412786 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 79/120
	I0116 03:07:44.414348 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 80/120
	I0116 03:07:45.415865 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 81/120
	I0116 03:07:46.417207 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 82/120
	I0116 03:07:47.418789 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 83/120
	I0116 03:07:48.420219 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 84/120
	I0116 03:07:49.421734 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 85/120
	I0116 03:07:50.423139 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 86/120
	I0116 03:07:51.424668 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 87/120
	I0116 03:07:52.426098 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 88/120
	I0116 03:07:53.427863 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 89/120
	I0116 03:07:54.429376 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 90/120
	I0116 03:07:55.431067 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 91/120
	I0116 03:07:56.432517 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 92/120
	I0116 03:07:57.434140 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 93/120
	I0116 03:07:58.435584 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 94/120
	I0116 03:07:59.437169 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 95/120
	I0116 03:08:00.438658 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 96/120
	I0116 03:08:01.440054 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 97/120
	I0116 03:08:02.441556 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 98/120
	I0116 03:08:03.443171 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 99/120
	I0116 03:08:04.445431 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 100/120
	I0116 03:08:05.446849 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 101/120
	I0116 03:08:06.448179 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 102/120
	I0116 03:08:07.449782 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 103/120
	I0116 03:08:08.451393 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 104/120
	I0116 03:08:09.453475 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 105/120
	I0116 03:08:10.454911 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 106/120
	I0116 03:08:11.456305 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 107/120
	I0116 03:08:12.457763 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 108/120
	I0116 03:08:13.459491 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 109/120
	I0116 03:08:14.460928 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 110/120
	I0116 03:08:15.462539 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 111/120
	I0116 03:08:16.464052 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 112/120
	I0116 03:08:17.465721 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 113/120
	I0116 03:08:18.467221 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 114/120
	I0116 03:08:19.469178 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 115/120
	I0116 03:08:20.470602 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 116/120
	I0116 03:08:21.472204 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 117/120
	I0116 03:08:22.473755 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 118/120
	I0116 03:08:23.475095 1010950 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for machine to stop 119/120
	I0116 03:08:24.475965 1010950 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 03:08:24.476053 1010950 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 03:08:24.478197 1010950 out.go:177] 
	W0116 03:08:24.479924 1010950 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0116 03:08:24.479940 1010950 out.go:239] * 
	* 
	W0116 03:08:24.483135 1010950 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 03:08:24.484830 1010950 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-775571 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-775571 -n default-k8s-diff-port-775571
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-775571 -n default-k8s-diff-port-775571: exit status 3 (18.560152788s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:08:43.046254 1011746 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.158:22: connect: no route to host
	E0116 03:08:43.046278 1011746 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.158:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-775571" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-934668 -n no-preload-934668
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-934668 -n no-preload-934668: exit status 3 (3.199515555s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:07:43.014236 1011245 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.29:22: connect: no route to host
	E0116 03:07:43.014260 1011245 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.29:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-934668 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-934668 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154325552s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.29:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-934668 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-934668 -n no-preload-934668
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-934668 -n no-preload-934668: exit status 3 (3.061667783s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:07:52.230400 1011399 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.29:22: connect: no route to host
	E0116 03:07:52.230424 1011399 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.29:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-934668" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)
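
Every follow-up command in this failure hits the same underlying condition: the node's SSH endpoint is unreachable, so the status probe exits 3 and the addon enable exits 11, both reporting "dial tcp <ip>:22: connect: no route to host". The sketch below only illustrates that failure mode with a plain TCP dial to the 192.168.50.29 address taken from the log; it is not minikube's code.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 192.168.50.29 is the node address from the log above; port 22 is the SSH
		// endpoint every minikube command in this failure needed to reach.
		conn, err := net.DialTimeout("tcp", "192.168.50.29:22", 5*time.Second)
		if err != nil {
			fmt.Println("status error:", err) // e.g. dial tcp 192.168.50.29:22: connect: no route to host
			return
		}
		defer conn.Close()
		fmt.Println("ssh port reachable")
	}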

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-480663 -n embed-certs-480663
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-480663 -n embed-certs-480663: exit status 3 (3.199644179s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:07:44.550204 1011274 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.150:22: connect: no route to host
	E0116 03:07:44.550227 1011274 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.150:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-480663 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-480663 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154938106s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.150:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-480663 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-480663 -n embed-certs-480663
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-480663 -n embed-certs-480663: exit status 3 (3.061299622s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:07:53.766352 1011430 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.150:22: connect: no route to host
	E0116 03:07:53.766383 1011430 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.150:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-480663" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)
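Because embed-certs-480663 is a kvm2 VM (see the --driver=kvm2 flags in the Audit table below), one way to separate a VM that is genuinely shut off from one that is running but unreachable over SSH is to ask libvirt directly. A small illustrative sketch, assuming virsh is available on the Jenkins host and the libvirt domain is named after the profile as it is in the driver logs below:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// domState returns the domain state as libvirt sees it (for example "running"
	// or "shut off"), independent of whether SSH to the guest works.
	func domState(domain string) (string, error) {
		out, err := exec.Command("virsh", "-c", "qemu:///system", "domstate", domain).CombinedOutput()
		return string(out), err
	}

	func main() {
		state, err := domState("embed-certs-480663")
		if err != nil {
			fmt.Println("virsh error:", err, state)
			return
		}
		fmt.Print(state)
	}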

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-788237 -n old-k8s-version-788237
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-788237 -n old-k8s-version-788237: exit status 3 (3.199420653s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:08:10.918262 1011570 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.91:22: connect: no route to host
	E0116 03:08:10.918286 1011570 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.91:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-788237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0116 03:08:12.496246  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-788237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154897079s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.91:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-788237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-788237 -n old-k8s-version-788237
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-788237 -n old-k8s-version-788237: exit status 3 (3.060743411s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:08:20.134254 1011640 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.91:22: connect: no route to host
	E0116 03:08:20.134281 1011640 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.91:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-788237" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-775571 -n default-k8s-diff-port-775571
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-775571 -n default-k8s-diff-port-775571: exit status 3 (3.199284486s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:08:46.246181 1011825 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.158:22: connect: no route to host
	E0116 03:08:46.246206 1011825 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.158:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-775571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-775571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15371065s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.158:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-775571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-775571 -n default-k8s-diff-port-775571
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-775571 -n default-k8s-diff-port-775571: exit status 3 (3.062337247s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:08:55.462269 1011914 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.158:22: connect: no route to host
	E0116 03:08:55.462306 1011914 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.158:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-775571" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0116 03:18:12.495743  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 03:19:35.546964  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 03:19:50.170503  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-480663 -n embed-certs-480663
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-16 03:26:40.914791861 +0000 UTC m=+5178.798618443
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
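The failed wait above is a nine-minute poll for a pod labelled k8s-app=kubernetes-dashboard that never appears, ending in "context deadline exceeded". A minimal client-go sketch of the same kind of labelled-pod wait (assuming the kubeconfig path from this report and that its current context points at the embed-certs-480663 profile; a hand-rolled loop rather than the test helper used here):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17967-971255/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Same budget as the test: 9 minutes, after which API calls fail with
		// "context deadline exceeded".
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()

		for {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				panic(err) // e.g. context deadline exceeded, as in the report
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("dashboard pod running:", p.Name)
					return
				}
			}
			select {
			case <-ctx.Done():
				panic(ctx.Err())
			case <-time.After(5 * time.Second):
			}
		}
	}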
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-480663 -n embed-certs-480663
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-480663 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-480663 logs -n 25: (1.865292376s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-920153                              | cert-expiration-920153       | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	| start   | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| pause   | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-807979 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | disable-driver-mounts-807979                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:06 UTC |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-934668             | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-934668                                   | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-480663            | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-788237        | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-775571  | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC | 16 Jan 24 03:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC |                     |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-934668                  | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-480663                 | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-934668                                   | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC | 16 Jan 24 03:24 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC | 16 Jan 24 03:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-788237             | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC | 16 Jan 24 03:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-775571       | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC | 16 Jan 24 03:23 UTC |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:08:55
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:08:55.523172 1011955 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:08:55.523367 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:08:55.523379 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:08:55.523384 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:08:55.523559 1011955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 03:08:55.524097 1011955 out.go:303] Setting JSON to false
	I0116 03:08:55.525108 1011955 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13885,"bootTime":1705360651,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 03:08:55.525170 1011955 start.go:138] virtualization: kvm guest
	I0116 03:08:55.527591 1011955 out.go:177] * [default-k8s-diff-port-775571] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 03:08:55.529034 1011955 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:08:55.529110 1011955 notify.go:220] Checking for updates...
	I0116 03:08:55.530388 1011955 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:08:55.531787 1011955 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:08:55.533364 1011955 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 03:08:55.534716 1011955 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 03:08:55.535979 1011955 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:08:55.537715 1011955 config.go:182] Loaded profile config "default-k8s-diff-port-775571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:08:55.538436 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:08:55.538496 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:08:55.553180 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I0116 03:08:55.553640 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:08:55.554204 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:08:55.554227 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:08:55.554581 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:08:55.554799 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:08:55.555037 1011955 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:08:55.555380 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:08:55.555442 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:08:55.570254 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46641
	I0116 03:08:55.570682 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:08:55.571208 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:08:55.571235 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:08:55.571622 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:08:55.571835 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:08:55.608921 1011955 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 03:08:55.610466 1011955 start.go:298] selected driver: kvm2
	I0116 03:08:55.610482 1011955 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-775571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-775571 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:08:55.610637 1011955 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:08:55.611416 1011955 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:08:55.611501 1011955 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17967-971255/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 03:08:55.627062 1011955 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 03:08:55.627489 1011955 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 03:08:55.627568 1011955 cni.go:84] Creating CNI manager for ""
	I0116 03:08:55.627585 1011955 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:08:55.627598 1011955 start_flags.go:321] config:
	{Name:default-k8s-diff-port-775571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-775571 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:08:55.627820 1011955 iso.go:125] acquiring lock: {Name:mkc0ce0b4b435c5fb7570521b794e5982bb7bf6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:08:55.630054 1011955 out.go:177] * Starting control plane node default-k8s-diff-port-775571 in cluster default-k8s-diff-port-775571
	I0116 03:08:56.294081 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:08:55.631888 1011955 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:08:55.631938 1011955 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 03:08:55.631953 1011955 cache.go:56] Caching tarball of preloaded images
	I0116 03:08:55.632083 1011955 preload.go:174] Found /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 03:08:55.632097 1011955 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 03:08:55.632257 1011955 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/config.json ...
	I0116 03:08:55.632487 1011955 start.go:365] acquiring machines lock for default-k8s-diff-port-775571: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:08:59.366084 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:05.446075 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:08.518122 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:14.598126 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:17.670148 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:23.750127 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:26.822075 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:32.902064 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:35.974222 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:42.054100 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:45.126136 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:51.206133 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:54.278161 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:00.358119 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:03.430197 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:09.510091 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:12.582128 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:18.662160 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:21.734193 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:27.814164 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:30.886157 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:36.966149 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:40.038146 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:46.118124 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:49.190101 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:55.269989 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:58.342124 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:04.422158 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:07.494110 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:13.574119 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:16.646126 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:22.726139 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:25.798139 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:31.878112 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:34.950159 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:41.030157 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:44.102169 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:50.182089 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:53.254213 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:59.334156 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:02.406103 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:08.486171 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:11.558273 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:17.638145 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:20.710185 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:26.790125 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:29.794327 1011501 start.go:369] acquired machines lock for "embed-certs-480663" in 4m35.850983647s
	I0116 03:12:29.794418 1011501 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:12:29.794429 1011501 fix.go:54] fixHost starting: 
	I0116 03:12:29.794787 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:12:29.794827 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:12:29.810363 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0116 03:12:29.810847 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:12:29.811350 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:12:29.811377 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:12:29.811743 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:12:29.811943 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:29.812098 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetState
	I0116 03:12:29.813836 1011501 fix.go:102] recreateIfNeeded on embed-certs-480663: state=Stopped err=<nil>
	I0116 03:12:29.813863 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	W0116 03:12:29.814085 1011501 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:12:29.816073 1011501 out.go:177] * Restarting existing kvm2 VM for "embed-certs-480663" ...
	I0116 03:12:29.792154 1011460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:12:29.792196 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:12:29.794110 1011460 machine.go:91] provisioned docker machine in 4m37.362238239s
	I0116 03:12:29.794181 1011460 fix.go:56] fixHost completed within 4m37.38762384s
	I0116 03:12:29.794190 1011460 start.go:83] releasing machines lock for "no-preload-934668", held for 4m37.387657639s
	W0116 03:12:29.794218 1011460 start.go:694] error starting host: provision: host is not running
	W0116 03:12:29.794363 1011460 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 03:12:29.794373 1011460 start.go:709] Will try again in 5 seconds ...
	I0116 03:12:29.817479 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Start
	I0116 03:12:29.817644 1011501 main.go:141] libmachine: (embed-certs-480663) Ensuring networks are active...
	I0116 03:12:29.818499 1011501 main.go:141] libmachine: (embed-certs-480663) Ensuring network default is active
	I0116 03:12:29.818799 1011501 main.go:141] libmachine: (embed-certs-480663) Ensuring network mk-embed-certs-480663 is active
	I0116 03:12:29.819175 1011501 main.go:141] libmachine: (embed-certs-480663) Getting domain xml...
	I0116 03:12:29.819788 1011501 main.go:141] libmachine: (embed-certs-480663) Creating domain...
	I0116 03:12:31.021602 1011501 main.go:141] libmachine: (embed-certs-480663) Waiting to get IP...
	I0116 03:12:31.022948 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:31.023338 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:31.023411 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:31.023303 1012490 retry.go:31] will retry after 276.789085ms: waiting for machine to come up
	I0116 03:12:31.301941 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:31.302463 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:31.302500 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:31.302382 1012490 retry.go:31] will retry after 256.134625ms: waiting for machine to come up
	I0116 03:12:31.560002 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:31.560544 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:31.560571 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:31.560490 1012490 retry.go:31] will retry after 439.008262ms: waiting for machine to come up
	I0116 03:12:32.001188 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:32.001642 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:32.001679 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:32.001577 1012490 retry.go:31] will retry after 408.362832ms: waiting for machine to come up
	I0116 03:12:32.411058 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:32.411391 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:32.411423 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:32.411337 1012490 retry.go:31] will retry after 734.236059ms: waiting for machine to come up
	I0116 03:12:33.146871 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:33.147227 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:33.147255 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:33.147168 1012490 retry.go:31] will retry after 675.663635ms: waiting for machine to come up
	I0116 03:12:33.824145 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:33.824670 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:33.824702 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:33.824595 1012490 retry.go:31] will retry after 759.820531ms: waiting for machine to come up
	I0116 03:12:34.796140 1011460 start.go:365] acquiring machines lock for no-preload-934668: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:12:34.585458 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:34.585893 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:34.585919 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:34.585853 1012490 retry.go:31] will retry after 1.421527223s: waiting for machine to come up
	I0116 03:12:36.008778 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:36.009237 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:36.009263 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:36.009198 1012490 retry.go:31] will retry after 1.590569463s: waiting for machine to come up
	I0116 03:12:37.601872 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:37.602247 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:37.602280 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:37.602215 1012490 retry.go:31] will retry after 1.734508863s: waiting for machine to come up
	I0116 03:12:39.339028 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:39.339618 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:39.339652 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:39.339547 1012490 retry.go:31] will retry after 2.357594548s: waiting for machine to come up
	I0116 03:12:41.699172 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:41.699607 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:41.699679 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:41.699610 1012490 retry.go:31] will retry after 2.660303994s: waiting for machine to come up
	I0116 03:12:44.362811 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:44.363139 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:44.363173 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:44.363109 1012490 retry.go:31] will retry after 3.358505884s: waiting for machine to come up
	I0116 03:12:47.725123 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.725787 1011501 main.go:141] libmachine: (embed-certs-480663) Found IP for machine: 192.168.61.150
	I0116 03:12:47.725838 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has current primary IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.725847 1011501 main.go:141] libmachine: (embed-certs-480663) Reserving static IP address...
	I0116 03:12:47.726433 1011501 main.go:141] libmachine: (embed-certs-480663) Reserved static IP address: 192.168.61.150
	I0116 03:12:47.726458 1011501 main.go:141] libmachine: (embed-certs-480663) Waiting for SSH to be available...
	I0116 03:12:47.726486 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "embed-certs-480663", mac: "52:54:00:1c:0e:bd", ip: "192.168.61.150"} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:47.726546 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | skip adding static IP to network mk-embed-certs-480663 - found existing host DHCP lease matching {name: "embed-certs-480663", mac: "52:54:00:1c:0e:bd", ip: "192.168.61.150"}
	I0116 03:12:47.726579 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Getting to WaitForSSH function...
	I0116 03:12:47.728781 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.729264 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:47.729316 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.729447 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Using SSH client type: external
	I0116 03:12:47.729484 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa (-rw-------)
	I0116 03:12:47.729519 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:12:47.729530 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | About to run SSH command:
	I0116 03:12:47.729542 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | exit 0
	I0116 03:12:47.817660 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | SSH cmd err, output: <nil>: 
	I0116 03:12:47.818207 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetConfigRaw
	I0116 03:12:47.818904 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetIP
	I0116 03:12:47.821493 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.821899 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:47.821938 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.822249 1011501 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/config.json ...
	I0116 03:12:47.822458 1011501 machine.go:88] provisioning docker machine ...
	I0116 03:12:47.822477 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:47.822718 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetMachineName
	I0116 03:12:47.822914 1011501 buildroot.go:166] provisioning hostname "embed-certs-480663"
	I0116 03:12:47.822936 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetMachineName
	I0116 03:12:47.823106 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:47.825414 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.825772 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:47.825821 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.825982 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:47.826176 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:47.826353 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:47.826513 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:47.826691 1011501 main.go:141] libmachine: Using SSH client type: native
	I0116 03:12:47.827071 1011501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0116 03:12:47.827091 1011501 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-480663 && echo "embed-certs-480663" | sudo tee /etc/hostname
	I0116 03:12:47.955360 1011501 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-480663
	
	I0116 03:12:47.955398 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:47.958259 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.958575 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:47.958607 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.958814 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:47.959044 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:47.959202 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:47.959343 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:47.959496 1011501 main.go:141] libmachine: Using SSH client type: native
	I0116 03:12:47.959863 1011501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0116 03:12:47.959892 1011501 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-480663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-480663/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-480663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:12:48.082423 1011501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:12:48.082457 1011501 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 03:12:48.082515 1011501 buildroot.go:174] setting up certificates
	I0116 03:12:48.082553 1011501 provision.go:83] configureAuth start
	I0116 03:12:48.082569 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetMachineName
	I0116 03:12:48.082866 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetIP
	I0116 03:12:48.085315 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.085590 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.085622 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.085766 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.088029 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.088306 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.088331 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.088499 1011501 provision.go:138] copyHostCerts
	I0116 03:12:48.088581 1011501 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 03:12:48.088625 1011501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 03:12:48.088713 1011501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 03:12:48.088856 1011501 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 03:12:48.088866 1011501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 03:12:48.088903 1011501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 03:12:48.088981 1011501 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 03:12:48.088996 1011501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 03:12:48.089030 1011501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 03:12:48.089101 1011501 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.embed-certs-480663 san=[192.168.61.150 192.168.61.150 localhost 127.0.0.1 minikube embed-certs-480663]
	I0116 03:12:48.160830 1011501 provision.go:172] copyRemoteCerts
	I0116 03:12:48.160903 1011501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:12:48.160965 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.163939 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.164277 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.164307 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.164531 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.164805 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.165006 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.165166 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:12:48.256101 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:12:48.280042 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 03:12:48.303724 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:12:48.326468 1011501 provision.go:86] duration metric: configureAuth took 243.88726ms
	I0116 03:12:48.326506 1011501 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:12:48.326754 1011501 config.go:182] Loaded profile config "embed-certs-480663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:12:48.326876 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.329344 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.329821 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.329859 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.329995 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.330217 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.330434 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.330590 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.330744 1011501 main.go:141] libmachine: Using SSH client type: native
	I0116 03:12:48.331080 1011501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0116 03:12:48.331099 1011501 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:12:48.635409 1011501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:12:48.635460 1011501 machine.go:91] provisioned docker machine in 812.972689ms
	I0116 03:12:48.635473 1011501 start.go:300] post-start starting for "embed-certs-480663" (driver="kvm2")
	I0116 03:12:48.635489 1011501 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:12:48.635520 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:48.635975 1011501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:12:48.636005 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.638568 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.638912 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.638947 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.639052 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.639272 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.639448 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.639608 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:12:48.729202 1011501 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:12:48.733911 1011501 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:12:48.733985 1011501 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 03:12:48.734062 1011501 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 03:12:48.734185 1011501 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 03:12:48.734437 1011501 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:12:48.744474 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:12:48.767453 1011501 start.go:303] post-start completed in 131.962731ms
	I0116 03:12:48.767483 1011501 fix.go:56] fixHost completed within 18.973054797s
	I0116 03:12:48.767537 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.770091 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.770364 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.770410 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.770516 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.770700 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.770885 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.771062 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.771258 1011501 main.go:141] libmachine: Using SSH client type: native
	I0116 03:12:48.771725 1011501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0116 03:12:48.771743 1011501 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:12:48.886832 1011681 start.go:369] acquired machines lock for "old-k8s-version-788237" in 4m28.568927849s
	I0116 03:12:48.886918 1011681 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:12:48.886930 1011681 fix.go:54] fixHost starting: 
	I0116 03:12:48.887453 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:12:48.887501 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:12:48.904045 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43423
	I0116 03:12:48.904557 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:12:48.905072 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:12:48.905099 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:12:48.905518 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:12:48.905746 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:12:48.905912 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetState
	I0116 03:12:48.907596 1011681 fix.go:102] recreateIfNeeded on old-k8s-version-788237: state=Stopped err=<nil>
	I0116 03:12:48.907628 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	W0116 03:12:48.907820 1011681 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:12:48.909761 1011681 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-788237" ...
	I0116 03:12:48.911234 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Start
	I0116 03:12:48.911413 1011681 main.go:141] libmachine: (old-k8s-version-788237) Ensuring networks are active...
	I0116 03:12:48.912247 1011681 main.go:141] libmachine: (old-k8s-version-788237) Ensuring network default is active
	I0116 03:12:48.912596 1011681 main.go:141] libmachine: (old-k8s-version-788237) Ensuring network mk-old-k8s-version-788237 is active
	I0116 03:12:48.913077 1011681 main.go:141] libmachine: (old-k8s-version-788237) Getting domain xml...
	I0116 03:12:48.913678 1011681 main.go:141] libmachine: (old-k8s-version-788237) Creating domain...
	I0116 03:12:50.157059 1011681 main.go:141] libmachine: (old-k8s-version-788237) Waiting to get IP...
	I0116 03:12:50.158170 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:50.158626 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:50.158723 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:50.158597 1012611 retry.go:31] will retry after 219.259678ms: waiting for machine to come up
	I0116 03:12:48.886627 1011501 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374768.861682880
	
	I0116 03:12:48.886687 1011501 fix.go:206] guest clock: 1705374768.861682880
	I0116 03:12:48.886698 1011501 fix.go:219] Guest: 2024-01-16 03:12:48.86168288 +0000 UTC Remote: 2024-01-16 03:12:48.767487292 +0000 UTC m=+294.991502995 (delta=94.195588ms)
	I0116 03:12:48.886721 1011501 fix.go:190] guest clock delta is within tolerance: 94.195588ms
	I0116 03:12:48.886726 1011501 start.go:83] releasing machines lock for "embed-certs-480663", held for 19.09234257s
	I0116 03:12:48.886751 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:48.887062 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetIP
	I0116 03:12:48.889754 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.890098 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.890128 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.890347 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:48.890906 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:48.891124 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:48.891223 1011501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:12:48.891269 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.891451 1011501 ssh_runner.go:195] Run: cat /version.json
	I0116 03:12:48.891477 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.894134 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.894220 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.894577 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.894619 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.894646 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.894672 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.894934 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.894944 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.895100 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.895122 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.895200 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.895270 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.895367 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:12:48.895401 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:12:48.979839 1011501 ssh_runner.go:195] Run: systemctl --version
	I0116 03:12:49.008683 1011501 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:12:49.161550 1011501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:12:49.167838 1011501 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:12:49.167937 1011501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:12:49.184428 1011501 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:12:49.184457 1011501 start.go:475] detecting cgroup driver to use...
	I0116 03:12:49.184542 1011501 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:12:49.202177 1011501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:12:49.215021 1011501 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:12:49.215100 1011501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:12:49.230944 1011501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:12:49.245401 1011501 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:12:49.368410 1011501 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:12:49.490710 1011501 docker.go:233] disabling docker service ...
	I0116 03:12:49.490804 1011501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:12:49.504462 1011501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:12:49.515523 1011501 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:12:49.632751 1011501 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:12:49.769999 1011501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:12:49.785053 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:12:49.803377 1011501 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:12:49.803436 1011501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:12:49.812729 1011501 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:12:49.812804 1011501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:12:49.822106 1011501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:12:49.831270 1011501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:12:49.840256 1011501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:12:49.849610 1011501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:12:49.858638 1011501 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:12:49.858713 1011501 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:12:49.872437 1011501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:12:49.882932 1011501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:12:50.003747 1011501 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:12:50.178808 1011501 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:12:50.178901 1011501 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:12:50.184631 1011501 start.go:543] Will wait 60s for crictl version
	I0116 03:12:50.184708 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:12:50.189104 1011501 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:12:50.226713 1011501 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:12:50.226833 1011501 ssh_runner.go:195] Run: crio --version
	I0116 03:12:50.285581 1011501 ssh_runner.go:195] Run: crio --version
	I0116 03:12:50.336274 1011501 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 03:12:50.337928 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetIP
	I0116 03:12:50.340938 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:50.341389 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:50.341434 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:50.341707 1011501 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0116 03:12:50.346116 1011501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:12:50.358498 1011501 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:12:50.358562 1011501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:12:50.399016 1011501 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 03:12:50.399102 1011501 ssh_runner.go:195] Run: which lz4
	I0116 03:12:50.403562 1011501 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:12:50.407754 1011501 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:12:50.407781 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 03:12:52.338554 1011501 crio.go:444] Took 1.935021 seconds to copy over tarball
	I0116 03:12:52.338657 1011501 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:12:50.379220 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:50.379668 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:50.379707 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:50.379617 1012611 retry.go:31] will retry after 265.569137ms: waiting for machine to come up
	I0116 03:12:50.647311 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:50.648272 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:50.648308 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:50.648165 1012611 retry.go:31] will retry after 322.357919ms: waiting for machine to come up
	I0116 03:12:50.971860 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:50.972437 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:50.972466 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:50.972414 1012611 retry.go:31] will retry after 554.899929ms: waiting for machine to come up
	I0116 03:12:51.529304 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:51.529854 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:51.529881 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:51.529781 1012611 retry.go:31] will retry after 666.131492ms: waiting for machine to come up
	I0116 03:12:52.197244 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:52.197715 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:52.197747 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:52.197677 1012611 retry.go:31] will retry after 905.276637ms: waiting for machine to come up
	I0116 03:12:53.104496 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:53.105075 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:53.105113 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:53.105018 1012611 retry.go:31] will retry after 849.59257ms: waiting for machine to come up
	I0116 03:12:53.956756 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:53.957265 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:53.957310 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:53.957214 1012611 retry.go:31] will retry after 1.208772763s: waiting for machine to come up
	I0116 03:12:55.168258 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:55.168715 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:55.168750 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:55.168656 1012611 retry.go:31] will retry after 1.842317385s: waiting for machine to come up
	I0116 03:12:55.368146 1011501 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.02945237s)
	I0116 03:12:55.368186 1011501 crio.go:451] Took 3.029602 seconds to extract the tarball
	I0116 03:12:55.368197 1011501 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:12:55.409542 1011501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:12:55.468263 1011501 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:12:55.468298 1011501 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:12:55.468401 1011501 ssh_runner.go:195] Run: crio config
	I0116 03:12:55.534437 1011501 cni.go:84] Creating CNI manager for ""
	I0116 03:12:55.534473 1011501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:12:55.534500 1011501 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:12:55.534554 1011501 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.150 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-480663 NodeName:embed-certs-480663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:12:55.534761 1011501 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-480663"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:12:55.534856 1011501 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-480663 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-480663 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:12:55.534953 1011501 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:12:55.550549 1011501 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:12:55.550643 1011501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:12:55.560831 1011501 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0116 03:12:55.578611 1011501 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:12:55.600405 1011501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0116 03:12:55.620622 1011501 ssh_runner.go:195] Run: grep 192.168.61.150	control-plane.minikube.internal$ /etc/hosts
	I0116 03:12:55.625483 1011501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:12:55.638353 1011501 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663 for IP: 192.168.61.150
	I0116 03:12:55.638404 1011501 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:12:55.638588 1011501 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 03:12:55.638649 1011501 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 03:12:55.638772 1011501 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/client.key
	I0116 03:12:55.638852 1011501 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/apiserver.key.2512ac4f
	I0116 03:12:55.638933 1011501 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/proxy-client.key
	I0116 03:12:55.639122 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 03:12:55.639164 1011501 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 03:12:55.639180 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 03:12:55.639217 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 03:12:55.639254 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:12:55.639286 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 03:12:55.639341 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:12:55.640395 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:12:55.667612 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 03:12:55.692576 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:12:55.717257 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:12:55.741983 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:12:55.766577 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:12:55.792372 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:12:55.817385 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:12:55.843037 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 03:12:55.873486 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:12:55.898499 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 03:12:55.925406 1011501 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:12:55.945389 1011501 ssh_runner.go:195] Run: openssl version
	I0116 03:12:55.951579 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 03:12:55.963228 1011501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 03:12:55.968375 1011501 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 03:12:55.968448 1011501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 03:12:55.974792 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:12:55.986496 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:12:55.998112 1011501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:12:56.003308 1011501 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:12:56.003397 1011501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:12:56.009406 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:12:56.022123 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 03:12:56.035041 1011501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 03:12:56.040564 1011501 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 03:12:56.040636 1011501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 03:12:56.047058 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 03:12:56.059998 1011501 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:12:56.065241 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:12:56.071918 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:12:56.078512 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:12:56.085645 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:12:56.092405 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:12:56.099010 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 03:12:56.105679 1011501 kubeadm.go:404] StartCluster: {Name:embed-certs-480663 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-480663 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:12:56.105773 1011501 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:12:56.105859 1011501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:12:56.153053 1011501 cri.go:89] found id: ""
	I0116 03:12:56.153168 1011501 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:12:56.165415 1011501 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:12:56.165448 1011501 kubeadm.go:636] restartCluster start
	I0116 03:12:56.165516 1011501 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:12:56.175884 1011501 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:56.177147 1011501 kubeconfig.go:92] found "embed-certs-480663" server: "https://192.168.61.150:8443"
	I0116 03:12:56.179924 1011501 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:12:56.189868 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:56.189935 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:56.202554 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:56.690001 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:56.690087 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:56.702873 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:57.190439 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:57.190526 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:57.203483 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:57.691004 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:57.691089 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:57.705628 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:58.190127 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:58.190268 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:58.203066 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:58.690714 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:58.690836 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:58.703512 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:57.013734 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:57.014338 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:57.014374 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:57.014291 1012611 retry.go:31] will retry after 1.812964487s: waiting for machine to come up
	I0116 03:12:58.828551 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:58.829042 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:58.829068 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:58.828972 1012611 retry.go:31] will retry after 2.844481084s: waiting for machine to come up
	I0116 03:12:59.190193 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:59.190305 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:59.202672 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:59.690192 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:59.690304 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:59.702988 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:00.190097 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:00.190194 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:00.202817 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:00.690356 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:00.690469 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:00.703381 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:01.190016 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:01.190103 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:01.205508 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:01.689888 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:01.689982 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:01.706681 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:02.190049 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:02.190151 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:02.206668 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:02.690222 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:02.690361 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:02.706881 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:03.189909 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:03.190004 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:03.203138 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:03.690789 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:03.690907 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:03.703489 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:01.674784 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:01.675368 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:13:01.675395 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:13:01.675337 1012611 retry.go:31] will retry after 3.198176955s: waiting for machine to come up
	I0116 03:13:04.875399 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:04.875880 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:13:04.875911 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:13:04.875824 1012611 retry.go:31] will retry after 3.762316841s: waiting for machine to come up
	I0116 03:13:04.190804 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:04.190926 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:04.203114 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:04.690805 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:04.690935 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:04.703456 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:05.190648 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:05.190760 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:05.203129 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:05.690744 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:05.690892 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:05.703526 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:06.190070 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:06.190217 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:06.202457 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:06.202494 1011501 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:13:06.202504 1011501 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:13:06.202517 1011501 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:13:06.202598 1011501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:13:06.241146 1011501 cri.go:89] found id: ""
	I0116 03:13:06.241255 1011501 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:13:06.257465 1011501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:13:06.267655 1011501 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:13:06.267728 1011501 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:06.277601 1011501 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:06.277628 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:06.388578 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:07.024945 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:07.210419 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:07.275175 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:07.353969 1011501 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:13:07.354074 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:07.854253 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:08.354855 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
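	Aside on the wait loop above: pgrep -xnf selects the newest matching process (-n) whose full command line (-f) matches the pattern exactly (-x), so the check only succeeds once a kube-apiserver started by minikube is actually running. A hedged one-line equivalent to try by hand inside the VM:
	
		sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process is up"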
	I0116 03:13:10.035188 1011955 start.go:369] acquired machines lock for "default-k8s-diff-port-775571" in 4m14.402660122s
	I0116 03:13:10.035270 1011955 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:13:10.035278 1011955 fix.go:54] fixHost starting: 
	I0116 03:13:10.035719 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:10.035767 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:10.054435 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35337
	I0116 03:13:10.054968 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:10.055812 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:13:10.055849 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:10.056304 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:10.056546 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:10.056719 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetState
	I0116 03:13:10.058431 1011955 fix.go:102] recreateIfNeeded on default-k8s-diff-port-775571: state=Stopped err=<nil>
	I0116 03:13:10.058467 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	W0116 03:13:10.058666 1011955 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:13:10.060742 1011955 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-775571" ...
	I0116 03:13:08.642785 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.643327 1011681 main.go:141] libmachine: (old-k8s-version-788237) Found IP for machine: 192.168.39.91
	I0116 03:13:08.643356 1011681 main.go:141] libmachine: (old-k8s-version-788237) Reserving static IP address...
	I0116 03:13:08.643376 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has current primary IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.643757 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "old-k8s-version-788237", mac: "52:54:00:64:b7:2e", ip: "192.168.39.91"} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:08.643780 1011681 main.go:141] libmachine: (old-k8s-version-788237) Reserved static IP address: 192.168.39.91
	I0116 03:13:08.643798 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | skip adding static IP to network mk-old-k8s-version-788237 - found existing host DHCP lease matching {name: "old-k8s-version-788237", mac: "52:54:00:64:b7:2e", ip: "192.168.39.91"}
	I0116 03:13:08.643810 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Getting to WaitForSSH function...
	I0116 03:13:08.643819 1011681 main.go:141] libmachine: (old-k8s-version-788237) Waiting for SSH to be available...
	I0116 03:13:08.646037 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.646391 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:08.646437 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.646519 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Using SSH client type: external
	I0116 03:13:08.646553 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa (-rw-------)
	I0116 03:13:08.646581 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:13:08.646591 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | About to run SSH command:
	I0116 03:13:08.646599 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | exit 0
	I0116 03:13:08.738009 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | SSH cmd err, output: <nil>: 
	I0116 03:13:08.738363 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetConfigRaw
	I0116 03:13:08.739116 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetIP
	I0116 03:13:08.741759 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.742196 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:08.742235 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.742479 1011681 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/config.json ...
	I0116 03:13:08.742682 1011681 machine.go:88] provisioning docker machine ...
	I0116 03:13:08.742701 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:08.742937 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetMachineName
	I0116 03:13:08.743154 1011681 buildroot.go:166] provisioning hostname "old-k8s-version-788237"
	I0116 03:13:08.743184 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetMachineName
	I0116 03:13:08.743338 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:08.745489 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.745856 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:08.745897 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.746073 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:08.746292 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:08.746426 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:08.746580 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:08.746791 1011681 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:08.747298 1011681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0116 03:13:08.747322 1011681 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-788237 && echo "old-k8s-version-788237" | sudo tee /etc/hostname
	I0116 03:13:08.878928 1011681 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-788237
	
	I0116 03:13:08.878966 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:08.882019 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.882417 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:08.882468 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.882564 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:08.882806 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:08.883022 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:08.883202 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:08.883384 1011681 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:08.883704 1011681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0116 03:13:08.883723 1011681 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-788237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-788237/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-788237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:13:09.011161 1011681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:13:09.011209 1011681 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 03:13:09.011245 1011681 buildroot.go:174] setting up certificates
	I0116 03:13:09.011261 1011681 provision.go:83] configureAuth start
	I0116 03:13:09.011275 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetMachineName
	I0116 03:13:09.011649 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetIP
	I0116 03:13:09.014580 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.014920 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.014954 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.015107 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:09.017381 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.017701 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.017731 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.017854 1011681 provision.go:138] copyHostCerts
	I0116 03:13:09.017937 1011681 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 03:13:09.017951 1011681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 03:13:09.018028 1011681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 03:13:09.018175 1011681 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 03:13:09.018190 1011681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 03:13:09.018223 1011681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 03:13:09.018307 1011681 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 03:13:09.018318 1011681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 03:13:09.018342 1011681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 03:13:09.018403 1011681 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-788237 san=[192.168.39.91 192.168.39.91 localhost 127.0.0.1 minikube old-k8s-version-788237]
	I0116 03:13:09.280154 1011681 provision.go:172] copyRemoteCerts
	I0116 03:13:09.280224 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:13:09.280252 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:09.283485 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.283829 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.283862 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.284193 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:09.284454 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.284599 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:09.284787 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:13:09.382440 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:13:09.410373 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0116 03:13:09.435625 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:13:09.460028 1011681 provision.go:86] duration metric: configureAuth took 448.744455ms
	I0116 03:13:09.460066 1011681 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:13:09.460309 1011681 config.go:182] Loaded profile config "old-k8s-version-788237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:13:09.460422 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:09.463079 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.463354 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.463396 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.463526 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:09.463784 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.464087 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.464272 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:09.464458 1011681 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:09.464814 1011681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0116 03:13:09.464838 1011681 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:13:09.783889 1011681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:13:09.783923 1011681 machine.go:91] provisioned docker machine in 1.041225615s
	I0116 03:13:09.783938 1011681 start.go:300] post-start starting for "old-k8s-version-788237" (driver="kvm2")
	I0116 03:13:09.783955 1011681 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:13:09.783981 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:09.784410 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:13:09.784452 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:09.787427 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.787841 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.787879 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.788022 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:09.788233 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.788409 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:09.788566 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:13:09.875964 1011681 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:13:09.880665 1011681 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:13:09.880700 1011681 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 03:13:09.880782 1011681 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 03:13:09.880879 1011681 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 03:13:09.881013 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:13:09.890286 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:13:09.913554 1011681 start.go:303] post-start completed in 129.596487ms
	I0116 03:13:09.913586 1011681 fix.go:56] fixHost completed within 21.026657085s
	I0116 03:13:09.913610 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:09.916767 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.917228 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.917265 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.917551 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:09.917759 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.918017 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.918222 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:09.918418 1011681 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:09.918793 1011681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0116 03:13:09.918816 1011681 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:13:10.035012 1011681 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374789.980840898
	
	I0116 03:13:10.035040 1011681 fix.go:206] guest clock: 1705374789.980840898
	I0116 03:13:10.035051 1011681 fix.go:219] Guest: 2024-01-16 03:13:09.980840898 +0000 UTC Remote: 2024-01-16 03:13:09.913590445 +0000 UTC m=+289.770143089 (delta=67.250453ms)
	I0116 03:13:10.035083 1011681 fix.go:190] guest clock delta is within tolerance: 67.250453ms
	I0116 03:13:10.035093 1011681 start.go:83] releasing machines lock for "old-k8s-version-788237", held for 21.148206908s
	I0116 03:13:10.035126 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:10.035410 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetIP
	I0116 03:13:10.038396 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.038745 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:10.038781 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.039048 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:10.039659 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:10.039881 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:10.039978 1011681 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:13:10.040024 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:10.040135 1011681 ssh_runner.go:195] Run: cat /version.json
	I0116 03:13:10.040160 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:10.043099 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.043326 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.043459 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:10.043482 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.043655 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:10.043756 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:10.043802 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.044001 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:10.044018 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:10.044241 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:10.044249 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:10.044409 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:10.044498 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:13:10.044528 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:13:10.131865 1011681 ssh_runner.go:195] Run: systemctl --version
	I0116 03:13:10.160343 1011681 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:13:10.062248 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Start
	I0116 03:13:10.062475 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Ensuring networks are active...
	I0116 03:13:10.063470 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Ensuring network default is active
	I0116 03:13:10.063800 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Ensuring network mk-default-k8s-diff-port-775571 is active
	I0116 03:13:10.064263 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Getting domain xml...
	I0116 03:13:10.065010 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Creating domain...
	I0116 03:13:10.316936 1011681 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:13:10.324330 1011681 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:13:10.324409 1011681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:13:10.343057 1011681 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:13:10.343090 1011681 start.go:475] detecting cgroup driver to use...
	I0116 03:13:10.343184 1011681 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:13:10.359325 1011681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:13:10.377310 1011681 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:13:10.377386 1011681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:13:10.396512 1011681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:13:10.416458 1011681 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:13:10.540518 1011681 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:13:10.671885 1011681 docker.go:233] disabling docker service ...
	I0116 03:13:10.672042 1011681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:13:10.689182 1011681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:13:10.705235 1011681 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:13:10.826545 1011681 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:13:10.941453 1011681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:13:10.954337 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:13:10.974814 1011681 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0116 03:13:10.974894 1011681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:10.984741 1011681 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:13:10.984811 1011681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:10.994451 1011681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:11.004459 1011681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
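	For reference, the three sed edits above are intended to leave the CRI-O drop-in with roughly the settings shown below; this is a sketch of the expected end state inferred from the commands, not a capture from the VM:
	
		# quick check inside the VM (same drop-in path used by the commands above)
		grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
		# expected, approximately:
		#   pause_image = "registry.k8s.io/pause:3.1"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"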
	I0116 03:13:11.014409 1011681 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:13:11.025057 1011681 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:13:11.033911 1011681 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:13:11.034003 1011681 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:13:11.048044 1011681 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:13:11.056724 1011681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:13:11.180914 1011681 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:13:11.369876 1011681 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:13:11.369971 1011681 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:13:11.375568 1011681 start.go:543] Will wait 60s for crictl version
	I0116 03:13:11.375638 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:11.379992 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:13:11.422734 1011681 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:13:11.422837 1011681 ssh_runner.go:195] Run: crio --version
	I0116 03:13:11.477909 1011681 ssh_runner.go:195] Run: crio --version
	I0116 03:13:11.536220 1011681 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0116 03:13:08.855145 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:09.355119 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:09.854553 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:09.882463 1011501 api_server.go:72] duration metric: took 2.528495988s to wait for apiserver process to appear ...
	I0116 03:13:09.882491 1011501 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:13:09.882516 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:09.883135 1011501 api_server.go:269] stopped: https://192.168.61.150:8443/healthz: Get "https://192.168.61.150:8443/healthz": dial tcp 192.168.61.150:8443: connect: connection refused
	I0116 03:13:10.382909 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
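	A manual equivalent of the healthz probe in this loop, assuming the host can reach the VM at 192.168.61.150:8443 (the -k flag skips TLS verification since the cluster CA is not in the host trust store):
	
		curl -sk https://192.168.61.150:8443/healthz; echo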
	I0116 03:13:11.537589 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetIP
	I0116 03:13:11.540815 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:11.541169 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:11.541199 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:11.541459 1011681 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 03:13:11.546215 1011681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:13:11.562291 1011681 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 03:13:11.562378 1011681 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:13:11.603542 1011681 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 03:13:11.603627 1011681 ssh_runner.go:195] Run: which lz4
	I0116 03:13:11.607873 1011681 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:13:11.613536 1011681 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:13:11.613577 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0116 03:13:13.454225 1011681 crio.go:444] Took 1.846391 seconds to copy over tarball
	I0116 03:13:13.454334 1011681 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:13:11.425638 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting to get IP...
	I0116 03:13:11.426748 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.427214 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.427314 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:11.427187 1012757 retry.go:31] will retry after 234.45504ms: waiting for machine to come up
	I0116 03:13:11.663924 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.664619 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.664664 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:11.664556 1012757 retry.go:31] will retry after 318.711044ms: waiting for machine to come up
	I0116 03:13:11.985398 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.985941 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.985978 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:11.985917 1012757 retry.go:31] will retry after 463.405848ms: waiting for machine to come up
	I0116 03:13:12.450776 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:12.451335 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:12.451361 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:12.451270 1012757 retry.go:31] will retry after 428.299543ms: waiting for machine to come up
	I0116 03:13:12.881383 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:12.881910 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:12.881946 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:12.881856 1012757 retry.go:31] will retry after 564.023978ms: waiting for machine to come up
	I0116 03:13:13.447917 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:13.448436 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:13.448492 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:13.448405 1012757 retry.go:31] will retry after 694.298162ms: waiting for machine to come up
	I0116 03:13:14.144469 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:14.145037 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:14.145084 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:14.144953 1012757 retry.go:31] will retry after 821.505467ms: waiting for machine to come up
	I0116 03:13:14.967941 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:14.968577 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:14.968611 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:14.968486 1012757 retry.go:31] will retry after 1.079929031s: waiting for machine to come up
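The retry.go lines above show the DHCP-lease wait growing from roughly 234ms to over a second between attempts. A minimal sketch of that grow-and-jitter backoff pattern; the lookup callback, growth factor, and messages are stand-ins rather than minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup with a growing, jittered delay until it returns an
// address or the overall timeout expires. All bounds are illustrative.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow roughly 1.5x per attempt
	}
	return "", errors.New("timed out waiting for machine to get an IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.0.2.10", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}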
	I0116 03:13:14.175997 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:13:14.176046 1011501 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:13:14.176064 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:14.244918 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:14.244979 1011501 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:14.383226 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:14.390006 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:14.390047 1011501 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:14.883209 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:14.889127 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:14.889170 1011501 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:15.382688 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:15.399515 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:15.399554 1011501 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:15.883088 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:15.891853 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 200:
	ok
	I0116 03:13:15.905636 1011501 api_server.go:141] control plane version: v1.28.4
	I0116 03:13:15.905683 1011501 api_server.go:131] duration metric: took 6.023183183s to wait for apiserver health ...
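The healthz probes above keep returning 500 while the post-start hooks finish, then flip to 200 and the wait ends. A minimal sketch of that polling pattern against a self-signed apiserver endpoint; the URL, interval, and timeout here are assumptions:

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// TLS verification is skipped because the test apiserver uses its own CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("apiserver never became healthy")
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.150:8443/healthz", time.Minute))
}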
	I0116 03:13:15.905697 1011501 cni.go:84] Creating CNI manager for ""
	I0116 03:13:15.905706 1011501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:13:15.907935 1011501 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:13:15.909466 1011501 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:13:15.922375 1011501 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
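The scp above drops a bridge conflist into /etc/cni/net.d. A generic sketch of what such a file can look like, written here from Go; the field values are illustrative CNI bridge/host-local settings, not the exact 457-byte file from this run:

package main

import (
	"fmt"
	"os"
)

// bridgeConflist is a minimal bridge CNI configuration using the standard
// bridge and host-local plugins; the subnet matches the pod CIDR used later
// in this log, everything else is a placeholder.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err) // needs root on a real node
	}
}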
	I0116 03:13:15.952930 1011501 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:13:15.964437 1011501 system_pods.go:59] 8 kube-system pods found
	I0116 03:13:15.964485 1011501 system_pods.go:61] "coredns-5dd5756b68-stqh5" [adbcef96-218b-42ed-9daf-72c274be0690] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:13:15.964494 1011501 system_pods.go:61] "etcd-embed-certs-480663" [6694af11-3b1a-4b84-adcb-3416b87a076f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:13:15.964502 1011501 system_pods.go:61] "kube-apiserver-embed-certs-480663" [3a3d0e5d-35eb-4f2f-9686-b17af19bc777] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:13:15.964508 1011501 system_pods.go:61] "kube-controller-manager-embed-certs-480663" [729b671f-23b9-409b-9f11-d6992f2355c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:13:15.964514 1011501 system_pods.go:61] "kube-proxy-j4786" [aabb98a7-fe55-4105-a5d2-c1e312464107] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:13:15.964520 1011501 system_pods.go:61] "kube-scheduler-embed-certs-480663" [11baa5d7-6e7b-4a25-be6f-e7975b51023d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:13:15.964525 1011501 system_pods.go:61] "metrics-server-57f55c9bc5-7d2fh" [512cf579-f335-4995-8721-74bb84da776e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:13:15.964541 1011501 system_pods.go:61] "storage-provisioner" [da59ff59-869f-48a9-a5c5-c95bb807cbcf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:13:15.964549 1011501 system_pods.go:74] duration metric: took 11.584104ms to wait for pod list to return data ...
	I0116 03:13:15.964560 1011501 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:13:15.971265 1011501 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:13:15.971310 1011501 node_conditions.go:123] node cpu capacity is 2
	I0116 03:13:15.971324 1011501 node_conditions.go:105] duration metric: took 6.758143ms to run NodePressure ...
	I0116 03:13:15.971346 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:16.332558 1011501 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:13:16.343354 1011501 kubeadm.go:787] kubelet initialised
	I0116 03:13:16.343392 1011501 kubeadm.go:788] duration metric: took 10.793951ms waiting for restarted kubelet to initialise ...
	I0116 03:13:16.343403 1011501 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:13:16.370777 1011501 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:16.393556 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.393599 1011501 pod_ready.go:81] duration metric: took 22.772202ms waiting for pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:16.393613 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.393622 1011501 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:16.410313 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "etcd-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.410355 1011501 pod_ready.go:81] duration metric: took 16.72056ms waiting for pod "etcd-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:16.410371 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "etcd-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.410380 1011501 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:16.422777 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.422819 1011501 pod_ready.go:81] duration metric: took 12.426537ms waiting for pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:16.422834 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.422843 1011501 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:16.434722 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.434760 1011501 pod_ready.go:81] duration metric: took 11.904523ms waiting for pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:16.434773 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.434783 1011501 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j4786" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:17.092534 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "kube-proxy-j4786" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.092568 1011501 pod_ready.go:81] duration metric: took 657.773691ms waiting for pod "kube-proxy-j4786" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:17.092581 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "kube-proxy-j4786" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.092590 1011501 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:17.158257 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.158294 1011501 pod_ready.go:81] duration metric: took 65.69466ms waiting for pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:17.158308 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.158317 1011501 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:17.872108 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.872149 1011501 pod_ready.go:81] duration metric: took 713.820621ms waiting for pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:17.872162 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.872171 1011501 pod_ready.go:38] duration metric: took 1.528756103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
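Each pod_ready check above is skipped because the node itself is not yet "Ready". An equivalent external wait can be expressed with kubectl's built-in condition wait; a minimal sketch shelling out to it, where the context and node names are taken from this log and the timeout is an assumption:

package main

import (
	"fmt"
	"os/exec"
)

// waitNodeReady blocks until the node reports the Ready condition, the same
// condition the pod_ready checks above are gated on.
func waitNodeReady(context, node, timeout string) error {
	cmd := exec.Command("kubectl", "--context", context,
		"wait", "--for=condition=Ready", "node/"+node, "--timeout="+timeout)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	if err := waitNodeReady("embed-certs-480663", "embed-certs-480663", "4m0s"); err != nil {
		fmt.Println("node did not become Ready:", err)
	}
}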
	I0116 03:13:17.872202 1011501 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:13:17.890580 1011501 ops.go:34] apiserver oom_adj: -16
	I0116 03:13:17.890613 1011501 kubeadm.go:640] restartCluster took 21.725155834s
	I0116 03:13:17.890626 1011501 kubeadm.go:406] StartCluster complete in 21.784958156s
	I0116 03:13:17.890693 1011501 settings.go:142] acquiring lock: {Name:mk39c69fb5454efd5afab021c02c3bec1f1b4e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:13:17.890792 1011501 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:13:17.893858 1011501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:13:18.133588 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:13:18.133712 1011501 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:13:18.133875 1011501 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-480663"
	I0116 03:13:18.133878 1011501 addons.go:69] Setting metrics-server=true in profile "embed-certs-480663"
	I0116 03:13:18.133911 1011501 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-480663"
	I0116 03:13:18.133906 1011501 addons.go:69] Setting default-storageclass=true in profile "embed-certs-480663"
	I0116 03:13:18.133920 1011501 addons.go:234] Setting addon metrics-server=true in "embed-certs-480663"
	W0116 03:13:18.133924 1011501 addons.go:243] addon storage-provisioner should already be in state true
	W0116 03:13:18.133932 1011501 addons.go:243] addon metrics-server should already be in state true
	I0116 03:13:18.133939 1011501 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-480663"
	I0116 03:13:18.133951 1011501 config.go:182] Loaded profile config "embed-certs-480663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:13:18.133990 1011501 host.go:66] Checking if "embed-certs-480663" exists ...
	I0116 03:13:18.133990 1011501 host.go:66] Checking if "embed-certs-480663" exists ...
	I0116 03:13:18.134422 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.134435 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.134441 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.134458 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.134482 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.134496 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.152772 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0116 03:13:18.153335 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.153822 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36539
	I0116 03:13:18.153952 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.153978 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.153953 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I0116 03:13:18.154272 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.154435 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.154637 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.154836 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.154860 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.154956 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetState
	I0116 03:13:18.155092 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.155118 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.155183 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.155408 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.155884 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.155939 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.155953 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.155985 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.159097 1011501 addons.go:234] Setting addon default-storageclass=true in "embed-certs-480663"
	W0116 03:13:18.159139 1011501 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:13:18.159175 1011501 host.go:66] Checking if "embed-certs-480663" exists ...
	I0116 03:13:18.159631 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.159709 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.176336 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34615
	I0116 03:13:18.177044 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35519
	I0116 03:13:18.177237 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.177646 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.177946 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.177971 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.178455 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.178505 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.178538 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.178951 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.178981 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetState
	I0116 03:13:18.179150 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetState
	I0116 03:13:18.179705 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0116 03:13:18.180094 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.180921 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.180934 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.181286 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.181902 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.181925 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.182091 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:13:18.182301 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:13:18.302482 1011501 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:18.202219 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40999
	I0116 03:13:18.581432 1011501 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:13:18.581416 1011501 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:13:18.709000 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:13:18.582081 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.709096 1011501 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:13:18.709126 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:13:18.709154 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:13:18.586643 1011501 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-480663" context rescaled to 1 replicas
	I0116 03:13:18.709184 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:13:18.709223 1011501 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:13:18.588936 1011501 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 03:13:18.709955 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.713092 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.713501 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.713740 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:13:18.714270 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:13:18.722911 1011501 out.go:177] * Verifying Kubernetes components...
	I0116 03:13:18.722952 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.723026 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:13:18.723078 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:13:18.724877 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.723318 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:13:18.724891 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.723318 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:13:18.724748 1011501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:13:18.725164 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:13:18.725165 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:13:18.725281 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.725333 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:13:18.725384 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:13:18.725507 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetState
	I0116 03:13:18.727468 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:13:18.727734 1011501 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:13:18.727754 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:13:18.727774 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:13:18.730959 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.731419 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:13:18.731488 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.731819 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:13:18.732013 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:13:18.732162 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:13:18.732328 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:13:18.750255 1011501 node_ready.go:35] waiting up to 6m0s for node "embed-certs-480663" to be "Ready" ...
	I0116 03:13:16.997115 1011681 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.542741465s)
	I0116 03:13:16.997156 1011681 crio.go:451] Took 3.542892 seconds to extract the tarball
	I0116 03:13:16.997169 1011681 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:13:17.046929 1011681 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:13:17.098255 1011681 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 03:13:17.098280 1011681 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:13:17.098386 1011681 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:17.098392 1011681 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:13:17.098461 1011681 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:13:17.098503 1011681 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:13:17.098391 1011681 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:13:17.098621 1011681 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0116 03:13:17.098462 1011681 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0116 03:13:17.098390 1011681 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:13:17.100000 1011681 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:13:17.100009 1011681 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0116 03:13:17.100019 1011681 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:13:17.100039 1011681 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:13:17.100005 1011681 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:17.100438 1011681 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0116 03:13:17.100461 1011681 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:13:17.100666 1011681 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:13:17.256272 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0116 03:13:17.256286 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0116 03:13:17.258442 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0116 03:13:17.259457 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:13:17.264044 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:13:17.267216 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:13:17.274663 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:13:17.423339 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:17.423697 1011681 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0116 03:13:17.423773 1011681 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0116 03:13:17.423813 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.460324 1011681 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0116 03:13:17.460382 1011681 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0116 03:13:17.460441 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.483883 1011681 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0116 03:13:17.483936 1011681 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:13:17.483999 1011681 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0116 03:13:17.484066 1011681 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0116 03:13:17.484087 1011681 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:13:17.484104 1011681 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:13:17.484135 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.484007 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.484144 1011681 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0116 03:13:17.484142 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.484166 1011681 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:13:17.484211 1011681 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0116 03:13:17.484237 1011681 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:13:17.484284 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.484243 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.613454 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0116 03:13:17.613555 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0116 03:13:17.613587 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:13:17.613625 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0116 03:13:17.613651 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:13:17.613689 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:13:17.613759 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:13:17.776287 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0116 03:13:17.787958 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0116 03:13:17.788016 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0116 03:13:17.788096 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0116 03:13:17.791623 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0116 03:13:17.791754 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0116 03:13:17.791815 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0116 03:13:17.791858 1011681 cache_images.go:92] LoadImages completed in 693.564709ms
	W0116 03:13:17.791955 1011681 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
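LoadImages above first probes the runtime for each image hash and, when the hash is missing, falls back to per-image cache files (which do not exist here, hence the warning). A minimal sketch of that presence probe, reusing the podman inspect command the log runs over SSH; the image list is just an example:

package main

import (
	"fmt"
	"os/exec"
)

// imagePresent reports whether the container runtime already holds the image,
// using the same podman image inspect probe seen in the log above.
func imagePresent(image string) bool {
	cmd := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image)
	return cmd.Run() == nil
}

func main() {
	for _, img := range []string{
		"registry.k8s.io/pause:3.1",
		"registry.k8s.io/kube-apiserver:v1.16.0",
	} {
		if imagePresent(img) {
			fmt.Println(img, "already in runtime")
		} else {
			fmt.Println(img, "needs transfer from cache")
		}
	}
}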
	I0116 03:13:17.792040 1011681 ssh_runner.go:195] Run: crio config
	I0116 03:13:17.851037 1011681 cni.go:84] Creating CNI manager for ""
	I0116 03:13:17.851066 1011681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:13:17.851109 1011681 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:13:17.851136 1011681 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.91 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-788237 NodeName:old-k8s-version-788237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 03:13:17.851281 1011681 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-788237"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-788237
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.91:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:13:17.851355 1011681 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-788237 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-788237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
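(Editor's note: the empty ExecStart= line in the drop-in above is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet.service before supplying the full command line. A minimal sketch of the same override pattern, with illustrative paths and flags rather than minikube's exact values:)

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (sketch)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml
    # After editing a drop-in, reload unit files and restart the service:
    #   sudo systemctl daemon-reload && sudo systemctl restart kubelet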
	I0116 03:13:17.851419 1011681 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0116 03:13:17.861305 1011681 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:13:17.861416 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:13:17.871242 1011681 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0116 03:13:17.891002 1011681 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:13:17.908934 1011681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0116 03:13:17.928274 1011681 ssh_runner.go:195] Run: grep 192.168.39.91	control-plane.minikube.internal$ /etc/hosts
	I0116 03:13:17.932258 1011681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
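(Editor's note: in the command above, the braces run as the unprivileged SSH user, so the rewritten hosts file is staged in /tmp and only the final cp into /etc/hosts is executed with sudo; the preceding grep check avoids appending a duplicate control-plane entry. The same "replace one host entry" pattern in generic form, host name and IP purely illustrative:)

    # Stage the edit as the calling user, then copy into place with root:
    { grep -v $'\tmyhost.example$' /etc/hosts; printf '192.0.2.10\tmyhost.example\n'; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts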
	I0116 03:13:17.947070 1011681 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237 for IP: 192.168.39.91
	I0116 03:13:17.947119 1011681 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:13:17.947316 1011681 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 03:13:17.947374 1011681 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 03:13:17.947476 1011681 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/client.key
	I0116 03:13:18.133447 1011681 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/apiserver.key.d2754551
	I0116 03:13:18.133566 1011681 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/proxy-client.key
	I0116 03:13:18.133765 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 03:13:18.133860 1011681 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 03:13:18.133884 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 03:13:18.133951 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 03:13:18.133988 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:13:18.134018 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 03:13:18.134075 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:13:18.135047 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:13:18.169653 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:13:18.203412 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:13:18.232247 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 03:13:18.264379 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:13:18.293926 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:13:18.320373 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:13:18.345098 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:13:18.375186 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 03:13:18.400408 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 03:13:18.426138 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:13:18.451943 1011681 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:13:18.470682 1011681 ssh_runner.go:195] Run: openssl version
	I0116 03:13:18.477291 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 03:13:18.487687 1011681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 03:13:18.492346 1011681 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 03:13:18.492438 1011681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 03:13:18.498376 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 03:13:18.509157 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 03:13:18.520433 1011681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 03:13:18.525633 1011681 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 03:13:18.525708 1011681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 03:13:18.531567 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:13:18.542827 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:13:18.553440 1011681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:18.558572 1011681 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:18.558647 1011681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:18.564459 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
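(Editor's note: the hash/ln sequences above register each CA certificate with OpenSSL's hashed-directory lookup: every PEM must be reachable in /etc/ssl/certs via a symlink named <subject-hash>.0. A minimal reproduction of the same two steps for an arbitrary certificate, file names being placeholders:)

    # Compute the subject hash OpenSSL uses for c_rehash-style directory lookups
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example-ca.pem)
    # Expose the certificate under the hashed name so OpenSSL can find it
    sudo ln -fs /etc/ssl/certs/example-ca.pem "/etc/ssl/certs/${HASH}.0"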
	I0116 03:13:18.575413 1011681 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:13:18.580317 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:13:18.589623 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:13:18.598327 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:13:18.604540 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:13:18.610538 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:13:18.616482 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
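(Editor's note: each -checkend 86400 call above asks OpenSSL whether the certificate will still be valid 86400 seconds, i.e. 24 hours, from now; a non-zero exit status means it expires within that window and would need regeneration. Used the same way in a script, as a sketch:)

    if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
      echo "apiserver certificate expires within 24h; regeneration needed" >&2
    fi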
	I0116 03:13:18.622438 1011681 kubeadm.go:404] StartCluster: {Name:old-k8s-version-788237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-788237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:13:18.622565 1011681 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:13:18.622638 1011681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:13:18.662697 1011681 cri.go:89] found id: ""
	I0116 03:13:18.662794 1011681 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:13:18.673299 1011681 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:13:18.673328 1011681 kubeadm.go:636] restartCluster start
	I0116 03:13:18.673404 1011681 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:13:18.683191 1011681 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:18.684893 1011681 kubeconfig.go:92] found "old-k8s-version-788237" server: "https://192.168.39.91:8443"
	I0116 03:13:18.688339 1011681 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:13:18.699684 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:18.699763 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:18.714966 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:19.200230 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:19.200346 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:19.216711 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:19.699865 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:19.699968 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:19.717864 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:20.200734 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:20.200839 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:16.049953 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:16.050440 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:16.050486 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:16.050405 1012757 retry.go:31] will retry after 1.677720431s: waiting for machine to come up
	I0116 03:13:17.729520 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:17.730062 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:17.730098 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:17.729997 1012757 retry.go:31] will retry after 1.686395601s: waiting for machine to come up
	I0116 03:13:19.419165 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:19.419699 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:19.419741 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:19.419628 1012757 retry.go:31] will retry after 2.679023059s: waiting for machine to come up
	I0116 03:13:18.844795 1011501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:13:18.861175 1011501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:13:18.964890 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:13:18.862657 1011501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:13:19.005912 1011501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:13:19.005941 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:13:19.047693 1011501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:13:19.047734 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:13:19.101576 1011501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:13:19.940514 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:19.940549 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:19.940914 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:19.940941 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:19.940954 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:19.940965 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:19.941288 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:19.941309 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:19.986987 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:19.987020 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:19.987375 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Closing plugin on server side
	I0116 03:13:19.989349 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:19.989375 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:20.550836 1011501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.449206565s)
	I0116 03:13:20.550903 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:20.550921 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:20.550961 1011501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.585981109s)
	I0116 03:13:20.551004 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:20.551020 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:20.551499 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Closing plugin on server side
	I0116 03:13:20.551509 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:20.551519 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:20.551564 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Closing plugin on server side
	I0116 03:13:20.551565 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:20.551604 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:20.551624 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:20.551610 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:20.551637 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:20.551654 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:20.551899 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:20.551918 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:20.551975 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Closing plugin on server side
	I0116 03:13:20.552009 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:20.552027 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:20.552050 1011501 addons.go:470] Verifying addon metrics-server=true in "embed-certs-480663"
	I0116 03:13:20.555953 1011501 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0116 03:13:20.557383 1011501 addons.go:505] enable addons completed in 2.42368035s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0116 03:13:20.756003 1011501 node_ready.go:58] node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:23.254943 1011501 node_ready.go:58] node "embed-certs-480663" has status "Ready":"False"
	W0116 03:13:20.218633 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:20.700343 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:20.700461 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:20.713613 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:21.200115 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:21.200232 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:21.214341 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:21.700520 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:21.700644 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:21.717190 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:22.200709 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:22.200870 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:22.217321 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:22.699859 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:22.699972 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:22.717201 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:23.200594 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:23.200713 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:23.217126 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:23.700769 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:23.700891 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:23.715639 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:24.200713 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:24.200800 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:24.216368 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:24.699816 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:24.699958 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:24.717041 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:25.200575 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:25.200673 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:22.100823 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:22.101280 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:22.101336 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:22.101245 1012757 retry.go:31] will retry after 3.352897115s: waiting for machine to come up
	I0116 03:13:25.456363 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:25.456824 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:25.456908 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:25.456819 1012757 retry.go:31] will retry after 4.541436356s: waiting for machine to come up
	I0116 03:13:24.754870 1011501 node_ready.go:49] node "embed-certs-480663" has status "Ready":"True"
	I0116 03:13:24.754900 1011501 node_ready.go:38] duration metric: took 6.00460635s waiting for node "embed-certs-480663" to be "Ready" ...
	I0116 03:13:24.754913 1011501 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:13:24.761593 1011501 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:24.769366 1011501 pod_ready.go:92] pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:24.769394 1011501 pod_ready.go:81] duration metric: took 7.773298ms waiting for pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:24.769407 1011501 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.782066 1011501 pod_ready.go:92] pod "etcd-embed-certs-480663" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:26.782105 1011501 pod_ready.go:81] duration metric: took 2.012689692s waiting for pod "etcd-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.782119 1011501 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.792641 1011501 pod_ready.go:92] pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:26.792674 1011501 pod_ready.go:81] duration metric: took 10.545313ms waiting for pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.792690 1011501 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.799734 1011501 pod_ready.go:92] pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:26.799756 1011501 pod_ready.go:81] duration metric: took 7.056918ms waiting for pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.799765 1011501 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j4786" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.804888 1011501 pod_ready.go:92] pod "kube-proxy-j4786" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:26.804924 1011501 pod_ready.go:81] duration metric: took 5.151602ms waiting for pod "kube-proxy-j4786" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.804937 1011501 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:27.954848 1011501 pod_ready.go:92] pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:27.954889 1011501 pod_ready.go:81] duration metric: took 1.149940262s waiting for pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:27.954904 1011501 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace to be "Ready" ...
	W0116 03:13:25.214882 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:25.700375 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:25.700473 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:25.713971 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:26.200077 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:26.200184 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:26.212440 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:26.699761 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:26.699855 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:26.713769 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:27.200383 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:27.200476 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:27.212354 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:27.699854 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:27.699946 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:27.712542 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:28.200037 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:28.200144 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:28.212556 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:28.700313 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:28.700415 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:28.712681 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:28.712718 1011681 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
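(Editor's note: the repeated "Checking apiserver status ..." / "stopped: unable to get apiserver pid" pairs above are a bounded poll: minikube retries the pgrep roughly every 500ms until its deadline expires, then falls through to "needs reconfigure". A minimal sketch of the same poll-until-deadline shape in bash; the 10s budget and error text are illustrative, not minikube's exact values:)

    deadline=$((SECONDS + 10))          # illustrative time budget
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if (( SECONDS >= deadline )); then
        echo "apiserver never came up: context deadline exceeded" >&2
        break
      fi
      sleep 0.5
    done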
	I0116 03:13:28.712759 1011681 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:13:28.712773 1011681 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:13:28.712840 1011681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:13:28.764021 1011681 cri.go:89] found id: ""
	I0116 03:13:28.764122 1011681 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:13:28.780410 1011681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:13:28.790517 1011681 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:13:28.790617 1011681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:28.800491 1011681 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:28.800544 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:28.935606 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:29.805004 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:30.030241 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:30.123106 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
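(Editor's note: the five commands above re-run the relevant kubeadm init phases individually (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml instead of performing a full kubeadm init. Reproduced by hand, using the config path and binary directory shown in the log:)

    cfg=/var/tmp/minikube/kubeadm.yaml
    kbin=/var/lib/minikube/binaries/v1.16.0
    sudo env PATH="$kbin:$PATH" kubeadm init phase certs all         --config "$cfg"
    sudo env PATH="$kbin:$PATH" kubeadm init phase kubeconfig all    --config "$cfg"
    sudo env PATH="$kbin:$PATH" kubeadm init phase kubelet-start     --config "$cfg"
    sudo env PATH="$kbin:$PATH" kubeadm init phase control-plane all --config "$cfg"
    sudo env PATH="$kbin:$PATH" kubeadm init phase etcd local        --config "$cfg"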
	I0116 03:13:30.003874 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.004370 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Found IP for machine: 192.168.72.158
	I0116 03:13:30.004394 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Reserving static IP address...
	I0116 03:13:30.004424 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has current primary IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.004824 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-775571", mac: "52:54:00:4b:bc:45", ip: "192.168.72.158"} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.004853 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | skip adding static IP to network mk-default-k8s-diff-port-775571 - found existing host DHCP lease matching {name: "default-k8s-diff-port-775571", mac: "52:54:00:4b:bc:45", ip: "192.168.72.158"}
	I0116 03:13:30.004868 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Reserved static IP address: 192.168.72.158
	I0116 03:13:30.004888 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for SSH to be available...
	I0116 03:13:30.004901 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Getting to WaitForSSH function...
	I0116 03:13:30.007176 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.007549 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.007592 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.007722 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Using SSH client type: external
	I0116 03:13:30.007752 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa (-rw-------)
	I0116 03:13:30.007791 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:13:30.007807 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | About to run SSH command:
	I0116 03:13:30.007822 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | exit 0
	I0116 03:13:30.105862 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | SSH cmd err, output: <nil>: 
	I0116 03:13:30.106241 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetConfigRaw
	I0116 03:13:30.107063 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetIP
	I0116 03:13:30.110265 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.110754 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.110788 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.111070 1011955 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/config.json ...
	I0116 03:13:30.111270 1011955 machine.go:88] provisioning docker machine ...
	I0116 03:13:30.111289 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:30.111511 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetMachineName
	I0116 03:13:30.111751 1011955 buildroot.go:166] provisioning hostname "default-k8s-diff-port-775571"
	I0116 03:13:30.111781 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetMachineName
	I0116 03:13:30.111987 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:30.114629 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.115002 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.115032 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.115205 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:30.115375 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.115551 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.115706 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:30.115886 1011955 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:30.116340 1011955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0116 03:13:30.116363 1011955 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-775571 && echo "default-k8s-diff-port-775571" | sudo tee /etc/hostname
	I0116 03:13:30.260423 1011955 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-775571
	
	I0116 03:13:30.260451 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:30.263641 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.264075 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.264117 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.264539 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:30.264776 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.264987 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.265162 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:30.265379 1011955 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:30.265894 1011955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0116 03:13:30.265929 1011955 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-775571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-775571/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-775571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:13:30.404028 1011955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:13:30.404070 1011955 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 03:13:30.404131 1011955 buildroot.go:174] setting up certificates
	I0116 03:13:30.404147 1011955 provision.go:83] configureAuth start
	I0116 03:13:30.404167 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetMachineName
	I0116 03:13:30.404539 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetIP
	I0116 03:13:30.407588 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.408002 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.408036 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.408229 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:30.410911 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.411309 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.411362 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.411463 1011955 provision.go:138] copyHostCerts
	I0116 03:13:30.411550 1011955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 03:13:30.411564 1011955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 03:13:30.411637 1011955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 03:13:30.411760 1011955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 03:13:30.411768 1011955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 03:13:30.411800 1011955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 03:13:30.411878 1011955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 03:13:30.411891 1011955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 03:13:30.411920 1011955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 03:13:30.411983 1011955 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-775571 san=[192.168.72.158 192.168.72.158 localhost 127.0.0.1 minikube default-k8s-diff-port-775571]
	I0116 03:13:30.478444 1011955 provision.go:172] copyRemoteCerts
	I0116 03:13:30.478520 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:13:30.478551 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:30.481824 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.482200 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.482239 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.482469 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:30.482663 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.482870 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:30.483070 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:13:31.280327 1011460 start.go:369] acquired machines lock for "no-preload-934668" in 56.48409901s
	I0116 03:13:31.280456 1011460 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:13:31.280473 1011460 fix.go:54] fixHost starting: 
	I0116 03:13:31.280948 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:31.280986 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:31.302076 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39289
	I0116 03:13:31.302631 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:31.303270 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:13:31.303299 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:31.303700 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:31.304127 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:31.304681 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetState
	I0116 03:13:31.307845 1011460 fix.go:102] recreateIfNeeded on no-preload-934668: state=Stopped err=<nil>
	I0116 03:13:31.307882 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	W0116 03:13:31.308092 1011460 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:13:31.310208 1011460 out.go:177] * Restarting existing kvm2 VM for "no-preload-934668" ...
	I0116 03:13:31.311591 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Start
	I0116 03:13:31.311829 1011460 main.go:141] libmachine: (no-preload-934668) Ensuring networks are active...
	I0116 03:13:31.312840 1011460 main.go:141] libmachine: (no-preload-934668) Ensuring network default is active
	I0116 03:13:31.313302 1011460 main.go:141] libmachine: (no-preload-934668) Ensuring network mk-no-preload-934668 is active
	I0116 03:13:31.313756 1011460 main.go:141] libmachine: (no-preload-934668) Getting domain xml...
	I0116 03:13:31.314627 1011460 main.go:141] libmachine: (no-preload-934668) Creating domain...
	I0116 03:13:30.580435 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:13:30.604188 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0116 03:13:30.627877 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:13:30.651737 1011955 provision.go:86] duration metric: configureAuth took 247.572907ms
	I0116 03:13:30.651768 1011955 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:13:30.651949 1011955 config.go:182] Loaded profile config "default-k8s-diff-port-775571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:13:30.652040 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:30.654855 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.655180 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.655224 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.655395 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:30.655676 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.655874 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.656047 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:30.656231 1011955 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:30.656542 1011955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0116 03:13:30.656562 1011955 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:13:30.996593 1011955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:13:30.996632 1011955 machine.go:91] provisioned docker machine in 885.348285ms
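	(Editor's note: the provisioning step above writes the CRI-O drop-in by piping a string through sudo tee over SSH and then restarts crio. Below is a minimal, standalone Go sketch of running such a remote command with golang.org/x/crypto/ssh; it is not the minikube ssh_runner implementation, and the host, port, user and key path are simply the values reported in the log above.)

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address are the ones reported in the log above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, host key not pinned
	}
	client, err := ssh.Dial("tcp", "192.168.72.158:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Same idea as the logged command: write the drop-in, then restart crio.
	cmd := `sudo mkdir -p /etc/sysconfig && printf '%s' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := session.CombinedOutput(cmd)
	fmt.Printf("%s", out)
	if err != nil {
		log.Fatal(err)
	}
}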
	I0116 03:13:30.996650 1011955 start.go:300] post-start starting for "default-k8s-diff-port-775571" (driver="kvm2")
	I0116 03:13:30.996669 1011955 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:13:30.996697 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:30.997187 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:13:30.997222 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:31.000071 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.000460 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:31.000498 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.000666 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:31.000867 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:31.001030 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:31.001215 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:13:31.102897 1011955 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:13:31.107910 1011955 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:13:31.107939 1011955 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 03:13:31.108003 1011955 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 03:13:31.108076 1011955 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 03:13:31.108165 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:13:31.118591 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:13:31.144536 1011955 start.go:303] post-start completed in 147.864906ms
	I0116 03:13:31.144581 1011955 fix.go:56] fixHost completed within 21.109302207s
	I0116 03:13:31.144609 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:31.147887 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.148261 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:31.148300 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.148487 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:31.148765 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:31.148980 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:31.149195 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:31.149426 1011955 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:31.149818 1011955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0116 03:13:31.149838 1011955 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:13:31.280175 1011955 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374811.251760286
	
	I0116 03:13:31.280203 1011955 fix.go:206] guest clock: 1705374811.251760286
	I0116 03:13:31.280210 1011955 fix.go:219] Guest: 2024-01-16 03:13:31.251760286 +0000 UTC Remote: 2024-01-16 03:13:31.144586974 +0000 UTC m=+275.673207404 (delta=107.173312ms)
	I0116 03:13:31.280231 1011955 fix.go:190] guest clock delta is within tolerance: 107.173312ms
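	(Editor's note: fix.go above compares the guest clock against the host-side timestamp and accepts the machine only when the delta is small. A minimal sketch of that comparison follows; the timestamps are the ones from the log, while the 2s tolerance is an assumed example value, not taken from the log.)

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether guest and host differ by no more than tol,
// regardless of which clock is ahead.
func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	// Values from the log above: guest clock vs. remote (host-side) timestamp.
	guest := time.Date(2024, 1, 16, 3, 13, 31, 251760286, time.UTC)
	host := time.Date(2024, 1, 16, 3, 13, 31, 144586974, time.UTC)

	delta, ok := withinTolerance(guest, host, 2*time.Second) // tolerance is an illustrative value
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)  // delta=107.173312ms within tolerance: true
}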
	I0116 03:13:31.280242 1011955 start.go:83] releasing machines lock for "default-k8s-diff-port-775571", held for 21.244993059s
	I0116 03:13:31.280274 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:31.280606 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetIP
	I0116 03:13:31.284082 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.284580 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:31.284627 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.284960 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:31.285552 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:31.285784 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:31.285894 1011955 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:13:31.285954 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:31.286062 1011955 ssh_runner.go:195] Run: cat /version.json
	I0116 03:13:31.286081 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:31.289112 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.289486 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.289541 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:31.289565 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.289700 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:31.289942 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:31.289959 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:31.289969 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.290169 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:31.290251 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:31.290334 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:13:31.290487 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:31.290643 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:31.290787 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:13:31.412666 1011955 ssh_runner.go:195] Run: systemctl --version
	I0116 03:13:31.420934 1011955 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:13:31.571465 1011955 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:13:31.580180 1011955 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:13:31.580312 1011955 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:13:31.601148 1011955 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:13:31.601187 1011955 start.go:475] detecting cgroup driver to use...
	I0116 03:13:31.601274 1011955 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:13:31.622197 1011955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:13:31.637047 1011955 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:13:31.637146 1011955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:13:31.655781 1011955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:13:31.678925 1011955 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:13:31.827298 1011955 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:13:31.973784 1011955 docker.go:233] disabling docker service ...
	I0116 03:13:31.973890 1011955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:13:32.003399 1011955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:13:32.022537 1011955 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:13:32.201640 1011955 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:13:32.336251 1011955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:13:32.352402 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:13:32.376724 1011955 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:13:32.376796 1011955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:32.387636 1011955 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:13:32.387721 1011955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:32.399288 1011955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:32.411777 1011955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:32.425137 1011955 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:13:32.438308 1011955 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:13:32.451165 1011955 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:13:32.451246 1011955 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:13:32.467922 1011955 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:13:32.479144 1011955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:13:32.651975 1011955 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:13:32.857869 1011955 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:13:32.857953 1011955 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:13:32.863869 1011955 start.go:543] Will wait 60s for crictl version
	I0116 03:13:32.863957 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:13:32.868179 1011955 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:13:32.917020 1011955 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:13:32.917111 1011955 ssh_runner.go:195] Run: crio --version
	I0116 03:13:32.970563 1011955 ssh_runner.go:195] Run: crio --version
	I0116 03:13:33.027800 1011955 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 03:13:29.966940 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:32.466746 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:30.212501 1011681 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:13:30.212577 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:30.712756 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:31.212694 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:31.713596 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:32.212767 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:32.258055 1011681 api_server.go:72] duration metric: took 2.045552104s to wait for apiserver process to appear ...
	I0116 03:13:32.258091 1011681 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:13:32.258118 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:32.258807 1011681 api_server.go:269] stopped: https://192.168.39.91:8443/healthz: Get "https://192.168.39.91:8443/healthz": dial tcp 192.168.39.91:8443: connect: connection refused
	I0116 03:13:32.758305 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:33.029157 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetIP
	I0116 03:13:33.032430 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:33.032824 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:33.032860 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:33.033077 1011955 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0116 03:13:33.037500 1011955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:13:33.050478 1011955 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:13:33.050573 1011955 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:13:33.096041 1011955 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 03:13:33.096133 1011955 ssh_runner.go:195] Run: which lz4
	I0116 03:13:33.100546 1011955 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:13:33.105198 1011955 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:13:33.105234 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 03:13:35.104728 1011955 crio.go:444] Took 2.004229 seconds to copy over tarball
	I0116 03:13:35.104817 1011955 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:13:32.655911 1011460 main.go:141] libmachine: (no-preload-934668) Waiting to get IP...
	I0116 03:13:32.657029 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:32.657609 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:32.657728 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:32.657598 1012976 retry.go:31] will retry after 271.069608ms: waiting for machine to come up
	I0116 03:13:32.930214 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:32.930725 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:32.930856 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:32.930775 1012976 retry.go:31] will retry after 377.793601ms: waiting for machine to come up
	I0116 03:13:33.310351 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:33.310835 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:33.310897 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:33.310781 1012976 retry.go:31] will retry after 416.26092ms: waiting for machine to come up
	I0116 03:13:33.728484 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:33.729148 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:33.729189 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:33.729011 1012976 retry.go:31] will retry after 608.181162ms: waiting for machine to come up
	I0116 03:13:34.339151 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:34.339614 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:34.339642 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:34.339539 1012976 retry.go:31] will retry after 750.260968ms: waiting for machine to come up
	I0116 03:13:35.090870 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:35.091333 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:35.091362 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:35.091285 1012976 retry.go:31] will retry after 700.212947ms: waiting for machine to come up
	I0116 03:13:35.793243 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:35.793740 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:35.793774 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:35.793633 1012976 retry.go:31] will retry after 743.854004ms: waiting for machine to come up
	I0116 03:13:36.539322 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:36.539985 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:36.540018 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:36.539939 1012976 retry.go:31] will retry after 1.305141922s: waiting for machine to come up
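	(Editor's note: the libmachine DBG lines above show retry.go waiting for the restarted VM to obtain an IP address, sleeping a growing, jittered delay between attempts. A minimal generic sketch of such a retry loop is shown below; the schedule and attempt cap are illustrative, not minikube's actual policy.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or attempts are exhausted, sleeping an
// increasing, jittered duration between attempts (similar in spirit to the
// retry.go lines in the log; the exact schedule here is illustrative).
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(10, 250*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done, err =", err)
}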
	I0116 03:13:34.974062 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:37.464767 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:37.759482 1011681 api_server.go:269] stopped: https://192.168.39.91:8443/healthz: Get "https://192.168.39.91:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0116 03:13:37.759559 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:39.188258 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:13:39.188300 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:13:39.188322 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:39.222005 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:13:39.222064 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:13:39.259251 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:39.360385 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:13:39.360456 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:13:39.759006 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:38.432521 1011955 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.327659635s)
	I0116 03:13:38.432570 1011955 crio.go:451] Took 3.327807 seconds to extract the tarball
	I0116 03:13:38.432585 1011955 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:13:38.477872 1011955 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:13:38.535414 1011955 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:13:38.535442 1011955 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:13:38.535510 1011955 ssh_runner.go:195] Run: crio config
	I0116 03:13:38.604605 1011955 cni.go:84] Creating CNI manager for ""
	I0116 03:13:38.604636 1011955 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:13:38.604663 1011955 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:13:38.604690 1011955 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.158 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-775571 NodeName:default-k8s-diff-port-775571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:13:38.604871 1011955 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.158
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-775571"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:13:38.604946 1011955 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-775571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-775571 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0116 03:13:38.605006 1011955 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:13:38.619020 1011955 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:13:38.619106 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:13:38.633715 1011955 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0116 03:13:38.651239 1011955 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:13:38.670877 1011955 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0116 03:13:38.689268 1011955 ssh_runner.go:195] Run: grep 192.168.72.158	control-plane.minikube.internal$ /etc/hosts
	I0116 03:13:38.694783 1011955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:13:38.709936 1011955 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571 for IP: 192.168.72.158
	I0116 03:13:38.709984 1011955 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:13:38.710196 1011955 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 03:13:38.710269 1011955 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 03:13:38.710379 1011955 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/client.key
	I0116 03:13:38.710471 1011955 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/apiserver.key.6c936bf0
	I0116 03:13:38.710533 1011955 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/proxy-client.key
	I0116 03:13:38.710677 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 03:13:38.710717 1011955 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 03:13:38.710734 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 03:13:38.710771 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 03:13:38.710810 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:13:38.710849 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 03:13:38.710911 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:13:38.711657 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:13:38.742564 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 03:13:38.770741 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:13:38.795401 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 03:13:38.819574 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:13:38.847962 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:13:38.872537 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:13:38.898930 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:13:38.924558 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:13:38.950417 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 03:13:38.976115 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 03:13:39.008493 1011955 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:13:39.028392 1011955 ssh_runner.go:195] Run: openssl version
	I0116 03:13:39.034429 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 03:13:39.046541 1011955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 03:13:39.051560 1011955 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 03:13:39.051656 1011955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 03:13:39.058169 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 03:13:39.072168 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 03:13:39.086485 1011955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 03:13:39.091108 1011955 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 03:13:39.091162 1011955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 03:13:39.098393 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:13:39.109323 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:13:39.121606 1011955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:39.127187 1011955 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:39.127263 1011955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:39.134830 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:13:39.149731 1011955 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:13:39.156181 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:13:39.164095 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:13:39.172662 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:13:39.180598 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:13:39.188640 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:13:39.197249 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
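	(Editor's note: each `openssl x509 ... -checkend 86400` run above verifies that a certificate will still be valid 24 hours from now. The same check can be done in Go with crypto/x509; a minimal sketch follows, using one of the certificate paths from the log.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d (the Go analogue of `openssl x509 -checkend <seconds>`).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}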
	I0116 03:13:39.206289 1011955 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-775571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-775571 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:13:39.206442 1011955 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:13:39.206509 1011955 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:13:39.259399 1011955 cri.go:89] found id: ""
	I0116 03:13:39.259481 1011955 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:13:39.273356 1011955 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:13:39.273385 1011955 kubeadm.go:636] restartCluster start
	I0116 03:13:39.273474 1011955 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:13:39.287459 1011955 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:39.288748 1011955 kubeconfig.go:92] found "default-k8s-diff-port-775571" server: "https://192.168.72.158:8444"
	I0116 03:13:39.291777 1011955 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:13:39.304936 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:39.305013 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:39.321035 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:39.805691 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:39.805843 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:39.821119 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:40.305352 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:40.305464 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:40.320908 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:40.205526 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:13:40.417347 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:13:40.417381 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:40.626819 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:13:40.626875 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:13:40.759016 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:40.769794 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:13:40.769867 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:13:41.258280 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:41.268104 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 200:
	ok
	I0116 03:13:41.276527 1011681 api_server.go:141] control plane version: v1.16.0
	I0116 03:13:41.276576 1011681 api_server.go:131] duration metric: took 9.018477008s to wait for apiserver health ...
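	(Editor's note: the api_server.go lines above poll https://192.168.39.91:8443/healthz until it stops answering with connection refused, 403 or 500 and finally returns 200. Below is a minimal standalone sketch of such a poll; TLS verification is skipped because the profile's CA is not in the system trust store, and the interval and timeout are illustrative values, not minikube's.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a cert signed by the cluster CA; skip
			// verification for this standalone sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.91:8443/healthz", 500*time.Millisecond, 2*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("apiserver healthy")
}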
	I0116 03:13:41.276587 1011681 cni.go:84] Creating CNI manager for ""
	I0116 03:13:41.276593 1011681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:13:41.278640 1011681 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:13:37.847223 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:37.847666 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:37.847702 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:37.847614 1012976 retry.go:31] will retry after 1.639650566s: waiting for machine to come up
	I0116 03:13:39.488850 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:39.489197 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:39.489230 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:39.489145 1012976 retry.go:31] will retry after 2.106627157s: waiting for machine to come up
	I0116 03:13:41.598019 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:41.598601 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:41.598635 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:41.598540 1012976 retry.go:31] will retry after 2.493521899s: waiting for machine to come up
	I0116 03:13:39.963772 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:41.965748 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
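	(Editor's note: pod_ready.go above repeatedly checks whether the metrics-server pod has reached the Ready condition. A minimal client-go sketch of that check follows; the kubeconfig path is an illustrative assumption, while the namespace and pod name are taken from the log.)

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig path; minikube maintains one per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.Background(),
		"metrics-server-57f55c9bc5-7d2fh", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
}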
	I0116 03:13:41.280699 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:13:41.300296 1011681 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:13:41.341944 1011681 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:13:41.361578 1011681 system_pods.go:59] 7 kube-system pods found
	I0116 03:13:41.361618 1011681 system_pods.go:61] "coredns-5644d7b6d9-5j7ps" [d1ccd80c-b19b-49ae-bc1c-deee7f0db229] Running
	I0116 03:13:41.361627 1011681 system_pods.go:61] "etcd-old-k8s-version-788237" [4a34c524-dce0-4c01-a1f2-291a59c02044] Running
	I0116 03:13:41.361634 1011681 system_pods.go:61] "kube-apiserver-old-k8s-version-788237" [2b802f72-d63e-423d-ac43-89b836bd4b70] Running
	I0116 03:13:41.361640 1011681 system_pods.go:61] "kube-controller-manager-old-k8s-version-788237" [a41d42f1-0587-4cb6-965f-fffdb8bcde5d] Running
	I0116 03:13:41.361645 1011681 system_pods.go:61] "kube-proxy-vtxjk" [4993e4ef-5193-4632-a61a-a0b38601239d] Running
	I0116 03:13:41.361651 1011681 system_pods.go:61] "kube-scheduler-old-k8s-version-788237" [712a30dc-0217-47d4-88ba-d63f6f2f6d02] Running
	I0116 03:13:41.361662 1011681 system_pods.go:61] "storage-provisioner" [2e43ef59-3c6b-4c78-81ae-71dbd0eaddfd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:13:41.361680 1011681 system_pods.go:74] duration metric: took 19.701772ms to wait for pod list to return data ...
	I0116 03:13:41.361698 1011681 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:13:41.366876 1011681 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:13:41.366918 1011681 node_conditions.go:123] node cpu capacity is 2
	I0116 03:13:41.366933 1011681 node_conditions.go:105] duration metric: took 5.228319ms to run NodePressure ...
	I0116 03:13:41.366961 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:41.921064 1011681 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:13:41.925272 1011681 retry.go:31] will retry after 140.477343ms: kubelet not initialised
	I0116 03:13:42.072065 1011681 retry.go:31] will retry after 346.605533ms: kubelet not initialised
	I0116 03:13:42.428950 1011681 retry.go:31] will retry after 456.811796ms: kubelet not initialised
	I0116 03:13:42.893528 1011681 retry.go:31] will retry after 821.458486ms: kubelet not initialised
	I0116 03:13:43.721228 1011681 retry.go:31] will retry after 1.260888799s: kubelet not initialised
	I0116 03:13:44.988346 1011681 retry.go:31] will retry after 1.183564266s: kubelet not initialised
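The repeated `retry.go:31` lines above are minikube's generic backoff helper: each failed probe schedules the next attempt after a growing, jittered delay. A minimal shell sketch of the same pattern, using `systemctl is-active kubelet` as a stand-in for the real "kubelet initialised" check:

	delay=1
	until sudo systemctl is-active --quiet kubelet; do
	    echo "kubelet not initialised, retrying in ${delay}s"
	    sleep "$delay"
	    delay=$(( delay * 2 ))
	done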
	I0116 03:13:40.805756 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:40.805890 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:40.823823 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:41.305065 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:41.305161 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:41.317967 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:41.805703 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:41.805813 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:41.819698 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:42.305067 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:42.305209 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:42.318643 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:42.805284 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:42.805381 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:42.821975 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:43.305106 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:43.305234 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:43.318457 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:43.805741 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:43.805902 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:43.820562 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:44.305077 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:44.305217 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:44.322452 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:44.805978 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:44.806111 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:44.822302 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:45.305330 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:45.305432 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:45.317788 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
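The probe repeated above is a single pgrep call; exit status 1 just means no matching kube-apiserver process exists yet, which is expected while the control plane is being rebuilt. For reference, the flags as used in the log are -x (whole-line match), -n (newest matching process) and -f (match against the full command line):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'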
	I0116 03:13:44.095061 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:44.095629 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:44.095658 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:44.095576 1012976 retry.go:31] will retry after 3.106364447s: waiting for machine to come up
	I0116 03:13:47.203798 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:47.204278 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:47.204310 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:47.204216 1012976 retry.go:31] will retry after 3.186263998s: waiting for machine to come up
	I0116 03:13:44.462154 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:46.467556 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:46.177475 1011681 retry.go:31] will retry after 2.879508446s: kubelet not initialised
	I0116 03:13:49.062319 1011681 retry.go:31] will retry after 3.01676683s: kubelet not initialised
	I0116 03:13:45.805770 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:45.805896 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:45.822222 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:46.305853 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:46.305977 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:46.322927 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:46.805392 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:46.805501 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:46.822012 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:47.305518 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:47.305634 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:47.322371 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:47.805932 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:47.806027 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:47.821119 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:48.305696 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:48.305832 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:48.318366 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:48.805946 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:48.806039 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:48.819066 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:49.305780 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:49.305922 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:49.318542 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:49.318576 1011955 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:13:49.318588 1011955 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:13:49.318602 1011955 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:13:49.318663 1011955 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:13:49.361552 1011955 cri.go:89] found id: ""
	I0116 03:13:49.361636 1011955 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:13:49.378478 1011955 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:13:49.389158 1011955 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:13:49.389248 1011955 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:49.398973 1011955 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:49.399019 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:49.516974 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
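The reconfigure path does not rerun a full `kubeadm init`; it replays individual phases against the generated /var/tmp/minikube/kubeadm.yaml using the bundled binaries under /var/lib/minikube/binaries/v1.28.4. The sequence visible in this log (the remaining phases appear a little further down) is roughly:

	sudo kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml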
	I0116 03:13:50.394812 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.395295 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has current primary IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.395323 1011460 main.go:141] libmachine: (no-preload-934668) Found IP for machine: 192.168.50.29
	I0116 03:13:50.395338 1011460 main.go:141] libmachine: (no-preload-934668) Reserving static IP address...
	I0116 03:13:50.395804 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "no-preload-934668", mac: "52:54:00:96:89:86", ip: "192.168.50.29"} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.395830 1011460 main.go:141] libmachine: (no-preload-934668) Reserved static IP address: 192.168.50.29
	I0116 03:13:50.395851 1011460 main.go:141] libmachine: (no-preload-934668) DBG | skip adding static IP to network mk-no-preload-934668 - found existing host DHCP lease matching {name: "no-preload-934668", mac: "52:54:00:96:89:86", ip: "192.168.50.29"}
	I0116 03:13:50.395880 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Getting to WaitForSSH function...
	I0116 03:13:50.395898 1011460 main.go:141] libmachine: (no-preload-934668) Waiting for SSH to be available...
	I0116 03:13:50.398256 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.398608 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.398652 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.398838 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Using SSH client type: external
	I0116 03:13:50.398864 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa (-rw-------)
	I0116 03:13:50.398917 1011460 main.go:141] libmachine: (no-preload-934668) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:13:50.398936 1011460 main.go:141] libmachine: (no-preload-934668) DBG | About to run SSH command:
	I0116 03:13:50.398949 1011460 main.go:141] libmachine: (no-preload-934668) DBG | exit 0
	I0116 03:13:50.489493 1011460 main.go:141] libmachine: (no-preload-934668) DBG | SSH cmd err, output: <nil>: 
	I0116 03:13:50.489954 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetConfigRaw
	I0116 03:13:50.490626 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetIP
	I0116 03:13:50.493468 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.493892 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.493943 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.494329 1011460 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/config.json ...
	I0116 03:13:50.494545 1011460 machine.go:88] provisioning docker machine ...
	I0116 03:13:50.494566 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:50.494837 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetMachineName
	I0116 03:13:50.495038 1011460 buildroot.go:166] provisioning hostname "no-preload-934668"
	I0116 03:13:50.495067 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetMachineName
	I0116 03:13:50.495216 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:50.497623 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.498048 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.498068 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.498226 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:50.498413 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:50.498569 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:50.498711 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:50.498887 1011460 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:50.499381 1011460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0116 03:13:50.499400 1011460 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-934668 && echo "no-preload-934668" | sudo tee /etc/hostname
	I0116 03:13:50.632759 1011460 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-934668
	
	I0116 03:13:50.632795 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:50.636057 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.636489 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.636523 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.636684 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:50.636965 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:50.637189 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:50.637383 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:50.637560 1011460 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:50.637994 1011460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0116 03:13:50.638021 1011460 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-934668' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-934668/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-934668' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:13:50.765312 1011460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:13:50.765351 1011460 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 03:13:50.765380 1011460 buildroot.go:174] setting up certificates
	I0116 03:13:50.765395 1011460 provision.go:83] configureAuth start
	I0116 03:13:50.765408 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetMachineName
	I0116 03:13:50.765746 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetIP
	I0116 03:13:50.769190 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.769597 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.769670 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.769902 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:50.772879 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.773334 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.773367 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.773660 1011460 provision.go:138] copyHostCerts
	I0116 03:13:50.773750 1011460 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 03:13:50.773766 1011460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 03:13:50.773868 1011460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 03:13:50.774025 1011460 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 03:13:50.774043 1011460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 03:13:50.774077 1011460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 03:13:50.774174 1011460 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 03:13:50.774187 1011460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 03:13:50.774221 1011460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 03:13:50.774317 1011460 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.no-preload-934668 san=[192.168.50.29 192.168.50.29 localhost 127.0.0.1 minikube no-preload-934668]
	I0116 03:13:50.955273 1011460 provision.go:172] copyRemoteCerts
	I0116 03:13:50.955364 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:13:50.955404 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:50.958601 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.958977 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.959013 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.959258 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:50.959495 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:50.959704 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:50.959878 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:13:51.047852 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:13:51.079250 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 03:13:51.110170 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:13:51.137342 1011460 provision.go:86] duration metric: configureAuth took 371.929858ms
	I0116 03:13:51.137376 1011460 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:13:51.137602 1011460 config.go:182] Loaded profile config "no-preload-934668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:13:51.137690 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:51.140451 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.140935 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.140963 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.141217 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:51.141435 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.141604 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.141726 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:51.141913 1011460 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:51.142238 1011460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0116 03:13:51.142267 1011460 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:13:51.468734 1011460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:13:51.468771 1011460 machine.go:91] provisioned docker machine in 974.21023ms
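The `%!s(MISSING)` in the provisioning command above is Go's fmt marker for a format verb that received no argument when the message was logged (the literal `%s` inside the composed shell command gets re-interpreted by the logger); the command actually sent over SSH carries the real string, as the echoed output confirms. Done by hand, the step amounts to roughly this sketch:

	sudo mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio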
	I0116 03:13:51.468786 1011460 start.go:300] post-start starting for "no-preload-934668" (driver="kvm2")
	I0116 03:13:51.468803 1011460 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:13:51.468828 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:51.469200 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:13:51.469228 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:51.472154 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.472614 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.472665 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.472794 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:51.472991 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.473167 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:51.473321 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:13:51.558257 1011460 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:13:51.563146 1011460 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:13:51.563178 1011460 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 03:13:51.563243 1011460 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 03:13:51.563339 1011460 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 03:13:51.563437 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:13:51.574145 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:13:51.603071 1011460 start.go:303] post-start completed in 134.264931ms
	I0116 03:13:51.603104 1011460 fix.go:56] fixHost completed within 20.322632188s
	I0116 03:13:51.603128 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:51.606596 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.607040 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.607094 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.607312 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:51.607554 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.607710 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.607896 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:51.608107 1011460 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:51.608461 1011460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0116 03:13:51.608472 1011460 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:13:51.724098 1011460 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374831.664998093
	
	I0116 03:13:51.724128 1011460 fix.go:206] guest clock: 1705374831.664998093
	I0116 03:13:51.724137 1011460 fix.go:219] Guest: 2024-01-16 03:13:51.664998093 +0000 UTC Remote: 2024-01-16 03:13:51.60310878 +0000 UTC m=+359.363375393 (delta=61.889313ms)
	I0116 03:13:51.724164 1011460 fix.go:190] guest clock delta is within tolerance: 61.889313ms
	I0116 03:13:51.724171 1011460 start.go:83] releasing machines lock for "no-preload-934668", held for 20.443784472s
	I0116 03:13:51.724202 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:51.724534 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetIP
	I0116 03:13:51.727999 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.728527 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.728562 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.728809 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:51.729469 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:51.729704 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:51.729819 1011460 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:13:51.729869 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:51.729958 1011460 ssh_runner.go:195] Run: cat /version.json
	I0116 03:13:51.729976 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:51.732965 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.733095 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.733424 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.733451 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.733528 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.733550 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.733591 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:51.733725 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:51.733841 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.733972 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.733998 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:51.734170 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:51.734205 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:13:51.734306 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:13:51.819882 1011460 ssh_runner.go:195] Run: systemctl --version
	I0116 03:13:51.848935 1011460 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:13:52.005460 1011460 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:13:52.012691 1011460 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:13:52.012799 1011460 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:13:52.031857 1011460 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:13:52.031884 1011460 start.go:475] detecting cgroup driver to use...
	I0116 03:13:52.031950 1011460 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:13:52.049305 1011460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:13:52.063332 1011460 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:13:52.063407 1011460 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:13:52.080341 1011460 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:13:52.099750 1011460 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:13:52.241916 1011460 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:13:52.374908 1011460 docker.go:233] disabling docker service ...
	I0116 03:13:52.375010 1011460 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:13:52.393531 1011460 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:13:52.410744 1011460 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:13:52.545990 1011460 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:13:52.677872 1011460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:13:52.692652 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:13:52.711774 1011460 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:13:52.711871 1011460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:52.722079 1011460 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:13:52.722179 1011460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:52.732784 1011460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:52.742863 1011460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
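The three sed edits above leave the CRI-O drop-in with the pause image and cgroup settings minikube expects; roughly, after they run:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"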
	I0116 03:13:52.752987 1011460 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:13:52.764401 1011460 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:13:52.773584 1011460 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:13:52.773668 1011460 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:13:52.787400 1011460 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
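The netfilter check fails only because br_netfilter is not loaded yet, which is treated as non-fatal; the module is then loaded and IP forwarding enabled directly. A sketch of the equivalent commands (the log itself only sets ip_forward explicitly):

	sudo modprobe br_netfilter
	sudo sysctl -w net.ipv4.ip_forward=1
	# once the module is loaded, net.bridge.bridge-nf-call-iptables exists and can be
	# verified with: sysctl net.bridge.bridge-nf-call-iptables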
	I0116 03:13:52.798262 1011460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:13:52.928159 1011460 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:13:53.106967 1011460 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:13:53.107069 1011460 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:13:53.112312 1011460 start.go:543] Will wait 60s for crictl version
	I0116 03:13:53.112387 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.116701 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:13:53.166149 1011460 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
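For what it's worth, the same runtime check can be reproduced by hand against this profile, e.g.:

	out/minikube-linux-amd64 -p no-preload-934668 ssh "sudo crictl version"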
	I0116 03:13:53.166246 1011460 ssh_runner.go:195] Run: crio --version
	I0116 03:13:53.227306 1011460 ssh_runner.go:195] Run: crio --version
	I0116 03:13:53.289601 1011460 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0116 03:13:48.961681 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:50.969620 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:53.462450 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:52.085958 1011681 retry.go:31] will retry after 4.051731251s: kubelet not initialised
	I0116 03:13:50.527883 1011955 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.010858065s)
	I0116 03:13:50.527951 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:50.734058 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:50.824872 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:50.919552 1011955 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:13:50.919679 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:51.420316 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:51.920460 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:52.419846 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:52.920241 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:53.419933 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:53.920527 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:53.948958 1011955 api_server.go:72] duration metric: took 3.029405367s to wait for apiserver process to appear ...
	I0116 03:13:53.948990 1011955 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:13:53.949018 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
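The healthz wait that starts here can be mimicked with a plain curl against the same endpoint (-k because the apiserver presents a cluster-internal certificate). An anonymous request returns 403 until the RBAC bootstrap roles exist, then 500 while post-start hooks finish, and finally ok:

	curl -k https://192.168.72.158:8444/healthz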
	I0116 03:13:53.291126 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetIP
	I0116 03:13:53.294326 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:53.294780 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:53.294833 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:53.295093 1011460 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0116 03:13:53.300971 1011460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:13:53.316040 1011460 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 03:13:53.316107 1011460 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:13:53.368111 1011460 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0116 03:13:53.368138 1011460 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:13:53.368196 1011460 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:53.368485 1011460 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:13:53.368569 1011460 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:13:53.368584 1011460 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:13:53.368596 1011460 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:13:53.368607 1011460 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:13:53.368626 1011460 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0116 03:13:53.368669 1011460 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:13:53.370675 1011460 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:13:53.370735 1011460 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:13:53.371123 1011460 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:13:53.371132 1011460 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:13:53.371191 1011460 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0116 03:13:53.371333 1011460 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:13:53.371456 1011460 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:53.371815 1011460 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:13:53.515854 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0116 03:13:53.524922 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:13:53.531697 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:13:53.540206 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0116 03:13:53.543219 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:13:53.546913 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:13:53.580609 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:13:53.610214 1011460 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0116 03:13:53.610281 1011460 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:13:53.610353 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.677663 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:53.687535 1011460 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0116 03:13:53.687595 1011460 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:13:53.687599 1011460 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0116 03:13:53.687638 1011460 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:13:53.687667 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.687717 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.862729 1011460 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0116 03:13:53.862804 1011460 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:13:53.862830 1011460 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0116 03:13:53.862929 1011460 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:13:53.863101 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.863151 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:13:53.862947 1011460 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0116 03:13:53.863216 1011460 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:13:53.863098 1011460 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0116 03:13:53.863245 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.863264 1011460 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:53.862873 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.863311 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.863060 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0116 03:13:53.863156 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:13:53.928805 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:13:53.968913 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0116 03:13:53.969132 1011460 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:13:53.974631 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0116 03:13:53.974701 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0116 03:13:53.974754 1011460 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:13:53.974928 1011460 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:13:53.974792 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:13:53.974818 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:13:53.974833 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:54.018085 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0116 03:13:54.018198 1011460 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:13:54.018288 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0116 03:13:54.018300 1011460 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:13:54.018326 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:13:54.086983 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 03:13:54.087041 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0116 03:13:54.087074 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0116 03:13:54.087111 1011460 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:13:54.087147 1011460 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:13:54.087148 1011460 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:13:54.087203 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0116 03:13:54.087245 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
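The cache_images/crio lines above trace minikube's image pre-load path for the crio runtime: each cached tarball under /var/lib/minikube/images is stat'ed on the guest, the copy is skipped when the file already exists, and the tarball is then imported into the runtime's store with sudo podman load -i. A minimal Go sketch of that flow (hypothetical helper; the real logic runs these commands over SSH from cache_images.go and crio.go):

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the log above: stat the tarball, skip the copy if it
// is already on the node, then load it into the container runtime via podman.
// Illustrative sketch only, not minikube's actual implementation.
func loadCachedImage(tarball string) error {
	if err := exec.Command("stat", "-c", "%s %y", tarball).Run(); err == nil {
		fmt.Printf("copy: skipping %s (exists)\n", tarball)
	}
	fmt.Printf("Loading image: %s\n", tarball)
	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/etcd_3.5.10-0"); err != nil {
		fmt.Println("load failed:", err)
	}
}
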
	I0116 03:13:55.466435 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:57.968591 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:57.859025 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:13:57.859081 1011955 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:13:57.859100 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:13:57.949519 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:57.949575 1011955 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:57.949623 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:13:57.965508 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:57.965553 1011955 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:58.449680 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:13:58.456250 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:58.456292 1011955 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:58.950052 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:13:58.962965 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:58.963019 1011955 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:59.449560 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:13:59.457086 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0116 03:13:59.469254 1011955 api_server.go:141] control plane version: v1.28.4
	I0116 03:13:59.469294 1011955 api_server.go:131] duration metric: took 5.520295477s to wait for apiserver health ...
	I0116 03:13:59.469308 1011955 cni.go:84] Creating CNI manager for ""
	I0116 03:13:59.469316 1011955 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:13:59.471524 1011955 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
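The polling above is the apiserver readiness gate: minikube keeps GETing /healthz on the forwarded port, treating the anonymous 403 (RBAC bootstrap roles not yet applied) and the 500s (poststarthooks still failing) as "not ready", and stops at the first 200 "ok", logging the total wait (about 5.5s here). A minimal Go sketch of such a loop, with a hypothetical waitForAPIServer helper and TLS verification skipped for brevity:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForAPIServer polls the healthz endpoint until it returns 200 "ok" or the
// timeout expires. 403 and 500 responses simply mean "keep waiting".
func waitForAPIServer(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // apiserver healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	fmt.Println(waitForAPIServer("https://192.168.72.158:8444/healthz", time.Minute))
}
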
	I0116 03:13:56.143871 1011681 retry.go:31] will retry after 12.777471538s: kubelet not initialised
	I0116 03:13:59.472896 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:13:59.486944 1011955 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:13:59.511553 1011955 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:13:59.530287 1011955 system_pods.go:59] 8 kube-system pods found
	I0116 03:13:59.530357 1011955 system_pods.go:61] "coredns-5dd5756b68-z7b9d" [735c028e-f6a8-4a96-a615-95befe445a97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:13:59.530374 1011955 system_pods.go:61] "etcd-default-k8s-diff-port-775571" [3e321076-74dd-49a8-b078-4f63505b5783] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:13:59.530391 1011955 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-775571" [07f01ea4-0317-4d3d-a03c-7c1756a5746c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:13:59.530409 1011955 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-775571" [5d4f4ee1-1f7c-4dfc-8c85-daca7a2d9fc0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:13:59.530428 1011955 system_pods.go:61] "kube-proxy-lntj2" [946acb12-217d-42e6-bcfc-37dca684b638] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:13:59.530437 1011955 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-775571" [6b278ad1-d59e-4b81-a4ec-cde1b643bb90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:13:59.530449 1011955 system_pods.go:61] "metrics-server-57f55c9bc5-9bsqm" [ef0830b9-7e34-4aab-a1a6-8f91881b6934] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:13:59.530460 1011955 system_pods.go:61] "storage-provisioner" [8b20335e-7293-48bd-99f6-987cd95a0dc2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:13:59.530474 1011955 system_pods.go:74] duration metric: took 18.829356ms to wait for pod list to return data ...
	I0116 03:13:59.530483 1011955 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:13:59.535596 1011955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:13:59.535637 1011955 node_conditions.go:123] node cpu capacity is 2
	I0116 03:13:59.535651 1011955 node_conditions.go:105] duration metric: took 5.161567ms to run NodePressure ...
	I0116 03:13:59.535675 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:00.026516 1011955 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:14:00.035093 1011955 kubeadm.go:787] kubelet initialised
	I0116 03:14:00.035126 1011955 kubeadm.go:788] duration metric: took 8.522284ms waiting for restarted kubelet to initialise ...
	I0116 03:14:00.035137 1011955 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:14:00.067410 1011955 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-z7b9d" in "kube-system" namespace to be "Ready" ...
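From here pod_ready.go re-checks each system-critical pod until its Ready condition turns True, with a 4m0s budget per pod (the later "duration metric: took ..." lines report how long each wait actually took). Outside the test harness the same wait can be expressed with kubectl wait; a hypothetical Go wrapper shelling out to it:

package main

import (
	"fmt"
	"os/exec"
)

// waitPodReady shells out to `kubectl wait`, which blocks until the pod's
// Ready condition is True or the timeout expires. Hypothetical wrapper, shown
// only to illustrate what the pod_ready.go lines in the log are waiting for.
func waitPodReady(context, namespace, pod, timeout string) error {
	cmd := exec.Command("kubectl", "--context", context,
		"-n", namespace, "wait", "--for=condition=Ready",
		"pod/"+pod, "--timeout="+timeout)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	_ = waitPodReady("default-k8s-diff-port-775571", "kube-system",
		"coredns-5dd5756b68-z7b9d", "4m0s")
}
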
	I0116 03:13:58.094229 1011460 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (4.076000974s)
	I0116 03:13:58.094289 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.075931984s)
	I0116 03:13:58.094310 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0116 03:13:58.094313 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0116 03:13:58.094331 1011460 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (4.007198419s)
	I0116 03:13:58.094353 1011460 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:13:58.094364 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0116 03:13:58.094367 1011460 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (4.007202527s)
	I0116 03:13:58.094384 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0116 03:13:58.094406 1011460 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (4.007194547s)
	I0116 03:13:58.094462 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0116 03:13:58.094412 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:14:01.772635 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.678136161s)
	I0116 03:14:01.772673 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0116 03:14:01.772705 1011460 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:14:01.772758 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:14:00.463370 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:02.471583 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:02.075650 1011955 pod_ready.go:102] pod "coredns-5dd5756b68-z7b9d" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:04.077051 1011955 pod_ready.go:102] pod "coredns-5dd5756b68-z7b9d" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:04.575569 1011955 pod_ready.go:92] pod "coredns-5dd5756b68-z7b9d" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:04.575601 1011955 pod_ready.go:81] duration metric: took 4.508014187s waiting for pod "coredns-5dd5756b68-z7b9d" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:04.575613 1011955 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:03.238654 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.465862156s)
	I0116 03:14:03.238716 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0116 03:14:03.238745 1011460 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:14:03.238799 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:14:05.517213 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.278362381s)
	I0116 03:14:05.517256 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0116 03:14:05.517290 1011460 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:14:05.517354 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:14:06.265419 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0116 03:14:06.265468 1011460 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:14:06.265522 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:14:04.544905 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:06.964607 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:08.928050 1011681 retry.go:31] will retry after 7.799067246s: kubelet not initialised
	I0116 03:14:06.583214 1011955 pod_ready.go:102] pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:08.584517 1011955 pod_ready.go:102] pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:08.427431 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.161882333s)
	I0116 03:14:08.427460 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0116 03:14:08.427485 1011460 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:14:08.427533 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:14:10.992767 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.565203793s)
	I0116 03:14:10.992809 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0116 03:14:10.992842 1011460 cache_images.go:123] Successfully loaded all cached images
	I0116 03:14:10.992849 1011460 cache_images.go:92] LoadImages completed in 17.624696262s
	I0116 03:14:10.992918 1011460 ssh_runner.go:195] Run: crio config
	I0116 03:14:11.057517 1011460 cni.go:84] Creating CNI manager for ""
	I0116 03:14:11.057552 1011460 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:14:11.057583 1011460 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:14:11.057614 1011460 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.29 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-934668 NodeName:no-preload-934668 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:14:11.057793 1011460 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-934668"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:14:11.057907 1011460 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-934668 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-934668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:14:11.057969 1011460 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0116 03:14:11.070793 1011460 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:14:11.070892 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:14:11.082832 1011460 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0116 03:14:11.103800 1011460 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0116 03:14:11.121508 1011460 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0116 03:14:11.139941 1011460 ssh_runner.go:195] Run: grep 192.168.50.29	control-plane.minikube.internal$ /etc/hosts
	I0116 03:14:11.144648 1011460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
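The bash one-liner above keeps the /etc/hosts update idempotent: it drops any existing control-plane.minikube.internal entry, appends one pointing at the node IP, and installs the result via a temp file and sudo cp. A rough Go sketch of issuing an equivalent command (simplified quoting, not minikube's exact string):

package main

import (
	"fmt"
	"os/exec"
)

// ensureHostsEntry rewrites /etc/hosts much like the logged one-liner: filter
// out any stale control-plane.minikube.internal line, append the current one,
// and install the result through a temp file. Illustrative only.
func ensureHostsEntry(ip string) error {
	script := "{ grep -v control-plane.minikube.internal /etc/hosts; " +
		"printf '%s\\tcontrol-plane.minikube.internal\\n' " + ip + "; } > /tmp/h.$$ " +
		"&& sudo cp /tmp/h.$$ /etc/hosts"
	return exec.Command("/bin/bash", "-c", script).Run()
}

func main() {
	if err := ensureHostsEntry("192.168.50.29"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}
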
	I0116 03:14:11.160034 1011460 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668 for IP: 192.168.50.29
	I0116 03:14:11.160079 1011460 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:14:11.160310 1011460 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 03:14:11.160371 1011460 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 03:14:11.160469 1011460 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/client.key
	I0116 03:14:11.160562 1011460 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/apiserver.key.1326a2fe
	I0116 03:14:11.160631 1011460 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/proxy-client.key
	I0116 03:14:11.160780 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 03:14:11.160861 1011460 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 03:14:11.160887 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 03:14:11.160927 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 03:14:11.160976 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:14:11.161008 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 03:14:11.161070 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:14:11.161922 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:14:11.192041 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:14:11.217326 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:14:11.243091 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:14:11.268536 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:14:11.291985 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:14:11.317943 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:14:11.343359 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:14:11.368837 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 03:14:11.392907 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:14:11.417266 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 03:14:11.441365 1011460 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:14:11.459961 1011460 ssh_runner.go:195] Run: openssl version
	I0116 03:14:11.466850 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 03:14:11.477985 1011460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 03:14:11.483233 1011460 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 03:14:11.483296 1011460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 03:14:11.489111 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:14:11.500499 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:14:11.511988 1011460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:14:11.517205 1011460 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:14:11.517300 1011460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:14:11.523361 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:14:11.536305 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 03:14:11.549308 1011460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 03:14:11.554540 1011460 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 03:14:11.554632 1011460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 03:14:11.560816 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 03:14:11.573145 1011460 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:14:11.578678 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:14:11.586807 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:14:11.593146 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:14:11.599812 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:14:11.606216 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:14:11.612827 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
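The openssl x509 ... -checkend 86400 runs above ask whether each existing certificate remains valid for at least the next 86400 seconds (24 hours); a non-zero exit is what appears to push minikube to regenerate a cert instead of reusing it on restart. A minimal sketch of the same check:

package main

import (
	"fmt"
	"os/exec"
)

// certValidFor24h runs the same check as the log above: `openssl x509
// -checkend 86400` exits 0 only if the certificate does not expire within the
// next 86400 seconds (24 hours). Sketch for illustration only.
func certValidFor24h(certPath string) bool {
	err := exec.Command("openssl", "x509", "-noout",
		"-in", certPath, "-checkend", "86400").Run()
	return err == nil
}

func main() {
	fmt.Println(certValidFor24h("/var/lib/minikube/certs/apiserver-etcd-client.crt"))
}
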
	I0116 03:14:11.619060 1011460 kubeadm.go:404] StartCluster: {Name:no-preload-934668 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-934668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.29 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:14:11.619201 1011460 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:14:11.619271 1011460 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:14:11.661293 1011460 cri.go:89] found id: ""
	I0116 03:14:11.661390 1011460 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:14:11.672886 1011460 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:14:11.672921 1011460 kubeadm.go:636] restartCluster start
	I0116 03:14:11.672998 1011460 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:14:11.683692 1011460 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:11.684896 1011460 kubeconfig.go:92] found "no-preload-934668" server: "https://192.168.50.29:8443"
	I0116 03:14:11.687623 1011460 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:14:11.698887 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:11.698967 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:11.711969 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
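The repeated "Checking apiserver status ... stopped: unable to get apiserver pid" blocks that follow come from restartCluster probing for a running kube-apiserver with pgrep roughly every half second; while the control plane is still down, pgrep exits 1 with empty stdout/stderr, which is exactly what each block records. A small Go sketch of that probe loop (illustrative helper names, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverPID looks for a kube-apiserver process the same way the log does;
// pgrep exits 1 (and prints nothing) when no process matches the pattern.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	return string(out), err
}

func main() {
	// Illustrative retry loop; minikube's real loop lives in api_server.go.
	for i := 0; i < 10; i++ {
		if pid, err := apiserverPID(); err == nil {
			fmt.Println("apiserver pid:", pid)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver still not running")
}
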
	I0116 03:14:12.199181 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:12.199277 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:12.213324 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:09.463196 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:11.464458 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:13.466325 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:10.585205 1011955 pod_ready.go:102] pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:12.585027 1011955 pod_ready.go:92] pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:12.585060 1011955 pod_ready.go:81] duration metric: took 8.009439483s waiting for pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.585074 1011955 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.592172 1011955 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:12.592208 1011955 pod_ready.go:81] duration metric: took 7.125355ms waiting for pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.592224 1011955 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.600113 1011955 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:12.600141 1011955 pod_ready.go:81] duration metric: took 7.90138ms waiting for pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.600152 1011955 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lntj2" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.606813 1011955 pod_ready.go:92] pod "kube-proxy-lntj2" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:12.606843 1011955 pod_ready.go:81] duration metric: took 6.6848ms waiting for pod "kube-proxy-lntj2" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.606852 1011955 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:14.115221 1011955 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:14.115256 1011955 pod_ready.go:81] duration metric: took 1.508396572s waiting for pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:14.115272 1011955 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.699849 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:12.700002 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:12.713330 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:13.199827 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:13.199938 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:13.212593 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:13.699177 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:13.699280 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:13.713754 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:14.199293 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:14.199387 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:14.211364 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:14.699976 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:14.700082 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:14.713420 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:15.198943 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:15.199056 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:15.211474 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:15.699723 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:15.699858 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:15.711566 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:16.199077 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:16.199195 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:16.210174 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:16.699188 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:16.699296 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:16.710971 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:17.199584 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:17.199733 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:17.211935 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:15.964130 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:18.463789 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:16.731737 1011681 kubeadm.go:787] kubelet initialised
	I0116 03:14:16.731763 1011681 kubeadm.go:788] duration metric: took 34.810672543s waiting for restarted kubelet to initialise ...
	I0116 03:14:16.731771 1011681 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:14:16.736630 1011681 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5j7ps" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.742482 1011681 pod_ready.go:92] pod "coredns-5644d7b6d9-5j7ps" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:16.742513 1011681 pod_ready.go:81] duration metric: took 5.851753ms waiting for pod "coredns-5644d7b6d9-5j7ps" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.742524 1011681 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-dfsf5" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.747113 1011681 pod_ready.go:92] pod "coredns-5644d7b6d9-dfsf5" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:16.747137 1011681 pod_ready.go:81] duration metric: took 4.606585ms waiting for pod "coredns-5644d7b6d9-dfsf5" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.747146 1011681 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.752744 1011681 pod_ready.go:92] pod "etcd-old-k8s-version-788237" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:16.752780 1011681 pod_ready.go:81] duration metric: took 5.626197ms waiting for pod "etcd-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.752794 1011681 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.757419 1011681 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-788237" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:16.757453 1011681 pod_ready.go:81] duration metric: took 4.649381ms waiting for pod "kube-apiserver-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.757468 1011681 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.131588 1011681 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-788237" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:17.131616 1011681 pod_ready.go:81] duration metric: took 374.139932ms waiting for pod "kube-controller-manager-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.131626 1011681 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vtxjk" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.531570 1011681 pod_ready.go:92] pod "kube-proxy-vtxjk" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:17.531610 1011681 pod_ready.go:81] duration metric: took 399.976074ms waiting for pod "kube-proxy-vtxjk" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.531625 1011681 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.931792 1011681 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-788237" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:17.931820 1011681 pod_ready.go:81] duration metric: took 400.186985ms waiting for pod "kube-scheduler-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.931832 1011681 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:19.939055 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:16.125560 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:18.624277 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:17.699246 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:17.699353 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:17.712025 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:18.199655 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:18.199784 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:18.212198 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:18.699816 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:18.699906 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:18.713019 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:19.199601 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:19.199706 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:19.211380 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:19.698919 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:19.699010 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:19.711001 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:20.199588 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:20.199694 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:20.211824 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:20.699345 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:20.699455 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:20.711489 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:21.199006 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:21.199111 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:21.210606 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:21.699928 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:21.700036 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:21.712086 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:21.712119 1011460 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:14:21.712128 1011460 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:14:21.712140 1011460 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:14:21.712220 1011460 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:14:21.754523 1011460 cri.go:89] found id: ""
	I0116 03:14:21.754644 1011460 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:14:21.770459 1011460 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:14:21.781022 1011460 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:14:21.781090 1011460 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:14:21.790780 1011460 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:14:21.790817 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:21.928434 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:20.962684 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:23.464521 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:21.941218 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:24.440549 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:21.123377 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:23.622729 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:22.965238 1011460 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.036762464s)
	I0116 03:14:22.965272 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:23.176590 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:23.273101 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:23.360976 1011460 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:14:23.361080 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:23.861957 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:24.361978 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:24.861204 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:25.361957 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:25.861277 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:25.884677 1011460 api_server.go:72] duration metric: took 2.523698355s to wait for apiserver process to appear ...
	I0116 03:14:25.884716 1011460 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:14:25.884742 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:25.885342 1011460 api_server.go:269] stopped: https://192.168.50.29:8443/healthz: Get "https://192.168.50.29:8443/healthz": dial tcp 192.168.50.29:8443: connect: connection refused
	I0116 03:14:26.385713 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:25.963386 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:28.463102 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:26.941545 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:29.439950 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:25.624030 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:27.624836 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:30.125387 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:30.121267 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:14:30.121300 1011460 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:14:30.121319 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:30.224826 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:14:30.224860 1011460 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:14:30.385083 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:30.392851 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:14:30.392896 1011460 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:14:30.885620 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:30.891094 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:14:30.891136 1011460 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:14:31.385130 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:31.399561 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:14:31.399594 1011460 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:14:31.885471 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:31.890676 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 200:
	ok
	I0116 03:14:31.900046 1011460 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 03:14:31.900079 1011460 api_server.go:131] duration metric: took 6.015355459s to wait for apiserver health ...
	I0116 03:14:31.900104 1011460 cni.go:84] Creating CNI manager for ""
	I0116 03:14:31.900111 1011460 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:14:31.902248 1011460 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:14:31.903832 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:14:31.920161 1011460 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:14:31.946401 1011460 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:14:31.957546 1011460 system_pods.go:59] 8 kube-system pods found
	I0116 03:14:31.957594 1011460 system_pods.go:61] "coredns-76f75df574-j55q6" [b8775751-87dd-4a05-8c84-05c09c947102] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:14:31.957605 1011460 system_pods.go:61] "etcd-no-preload-934668" [3ce80d11-c902-4c1d-9e2d-a65fed4d33c3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:14:31.957618 1011460 system_pods.go:61] "kube-apiserver-no-preload-934668" [3636a336-1ff1-4482-bf8c-559f8ae04f40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:14:31.957627 1011460 system_pods.go:61] "kube-controller-manager-no-preload-934668" [71bdeebc-ac26-43ca-bffe-0e8e97293d5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:14:31.957635 1011460 system_pods.go:61] "kube-proxy-c56bl" [d57e14d7-5e87-469f-8819-2749b2f7b54f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:14:31.957650 1011460 system_pods.go:61] "kube-scheduler-no-preload-934668" [10c61a29-dda4-4975-b290-a337e67070e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:14:31.957665 1011460 system_pods.go:61] "metrics-server-57f55c9bc5-lgmnp" [36a9cbc0-7644-421c-ab26-7262a295ea66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:14:31.957677 1011460 system_pods.go:61] "storage-provisioner" [c35e3af3-b48e-4184-8c06-2bd5bbbc399e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:14:31.957688 1011460 system_pods.go:74] duration metric: took 11.2629ms to wait for pod list to return data ...
	I0116 03:14:31.957703 1011460 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:14:31.963828 1011460 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:14:31.963860 1011460 node_conditions.go:123] node cpu capacity is 2
	I0116 03:14:31.963871 1011460 node_conditions.go:105] duration metric: took 6.162948ms to run NodePressure ...
	I0116 03:14:31.963894 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:32.261460 1011460 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:14:32.268148 1011460 kubeadm.go:787] kubelet initialised
	I0116 03:14:32.268181 1011460 kubeadm.go:788] duration metric: took 6.679075ms waiting for restarted kubelet to initialise ...
	I0116 03:14:32.268197 1011460 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:14:32.273936 1011460 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-j55q6" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:30.468482 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:32.967755 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:31.940340 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:34.440944 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:32.624635 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:35.124816 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:34.282691 1011460 pod_ready.go:102] pod "coredns-76f75df574-j55q6" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:35.787066 1011460 pod_ready.go:92] pod "coredns-76f75df574-j55q6" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:35.787097 1011460 pod_ready.go:81] duration metric: took 3.513129426s waiting for pod "coredns-76f75df574-j55q6" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:35.787112 1011460 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:35.463919 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:37.963533 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:36.939219 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:38.939377 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:37.128157 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:39.623730 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:37.798112 1011460 pod_ready.go:102] pod "etcd-no-preload-934668" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:39.794453 1011460 pod_ready.go:92] pod "etcd-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:39.794486 1011460 pod_ready.go:81] duration metric: took 4.007365728s waiting for pod "etcd-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:39.794496 1011460 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:39.799569 1011460 pod_ready.go:92] pod "kube-apiserver-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:39.799593 1011460 pod_ready.go:81] duration metric: took 5.090956ms waiting for pod "kube-apiserver-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:39.799602 1011460 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:40.309705 1011460 pod_ready.go:92] pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:40.309748 1011460 pod_ready.go:81] duration metric: took 510.137584ms waiting for pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:40.309761 1011460 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c56bl" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:40.315446 1011460 pod_ready.go:92] pod "kube-proxy-c56bl" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:40.315480 1011460 pod_ready.go:81] duration metric: took 5.710622ms waiting for pod "kube-proxy-c56bl" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:40.315494 1011460 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:40.467180 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:42.964593 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:40.940105 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:43.440135 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:41.623831 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:44.128608 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:42.324063 1011460 pod_ready.go:102] pod "kube-scheduler-no-preload-934668" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:44.325488 1011460 pod_ready.go:102] pod "kube-scheduler-no-preload-934668" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:44.823767 1011460 pod_ready.go:92] pod "kube-scheduler-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:44.823802 1011460 pod_ready.go:81] duration metric: took 4.508298497s waiting for pod "kube-scheduler-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:44.823818 1011460 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:46.834119 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:44.967470 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:47.467233 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:45.939182 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:48.439510 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:46.623093 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:48.623452 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:49.333255 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:51.334349 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:49.962021 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:51.964770 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:50.439867 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:52.938999 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:54.939661 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:50.624537 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:52.631432 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:55.124303 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:53.334508 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:55.832976 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:53.965445 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:56.462907 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:58.463527 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:57.438920 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:59.440238 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:57.621578 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:59.625435 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:58.332671 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:00.831831 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:00.465186 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:02.965629 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:01.440271 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:03.938665 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:02.124017 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:04.623475 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:03.334393 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:05.831665 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:05.463235 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:07.467282 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:05.939523 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:07.940337 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:07.122018 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:09.128032 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:08.331820 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:10.831910 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:09.963317 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:11.966051 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:10.439441 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:12.440308 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:14.940075 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:11.626866 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:14.122414 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:13.332152 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:15.831466 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:14.462126 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:16.465823 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:16.940118 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:19.440426 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:16.124215 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:18.624377 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:17.832950 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:20.329770 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:18.962537 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:20.966990 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:23.467331 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:21.939074 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:23.939905 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:21.122701 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:23.124103 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:25.137599 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:22.332462 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:24.832064 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:25.965556 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:28.467190 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:26.440039 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:28.940196 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:27.626127 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:29.626656 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:27.335063 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:29.834492 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:30.963079 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:33.462526 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:31.441125 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:33.939106 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:32.122443 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:34.123801 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:32.332153 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:34.832479 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:35.963546 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:37.964525 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:35.939539 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:38.439743 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:36.126074 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:38.623002 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:37.332835 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:39.832398 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:40.463769 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:42.962649 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:40.441879 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:42.939722 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:41.123840 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:43.625404 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:42.331290 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:44.831904 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:46.835841 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:44.964678 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:47.462896 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:45.439209 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:47.440145 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:49.939854 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:46.123807 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:48.126826 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:49.332005 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:51.332502 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:49.464762 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:51.964049 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:51.939904 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:54.439236 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:50.623153 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:52.624345 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:54.627203 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:53.831895 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:55.832232 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:54.463030 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:56.963946 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:56.439394 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:58.939030 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:56.627957 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:59.123599 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:58.332413 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:00.332637 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:59.463703 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:01.964436 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:00.941424 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:03.439546 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:01.123729 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:03.124738 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:02.832493 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:04.832547 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:04.463420 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:06.463569 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:05.941019 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:07.944737 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:05.624443 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:08.122957 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:07.333014 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:09.832431 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:11.834194 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:08.963205 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:10.963471 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:13.463710 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:10.439631 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:12.940212 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:10.622909 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:12.627122 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:15.122958 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:14.332800 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:16.831137 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:15.466395 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:17.962126 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:15.440905 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:17.939481 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:19.939923 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:17.624106 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:19.624608 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:18.832920 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:20.833205 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:19.963345 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:22.464212 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:21.941453 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:24.440153 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:22.122244 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:24.123259 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:23.331669 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:25.331743 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:24.963259 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:26.963490 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:26.442666 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:28.939968 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:26.123378 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:28.125204 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:27.332247 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:29.831956 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:28.963524 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:30.964135 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:33.462993 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:31.439282 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:33.439561 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:30.623257 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:33.123409 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:32.330980 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:34.332254 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:36.332346 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:35.463102 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:37.466011 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:35.441431 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:37.938841 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:39.939708 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:35.622848 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:37.623714 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:39.624018 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:38.333242 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:40.333759 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:39.961985 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:41.963743 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:41.940877 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:44.439855 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:42.123548 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:44.123765 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:42.831179 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:44.832125 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:46.832823 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:44.464876 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:46.963061 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:46.940520 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:49.438035 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:46.622349 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:48.626247 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:49.331443 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:51.832493 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:48.963476 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:50.963937 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:53.463054 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:51.439462 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:53.938617 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:51.124901 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:53.621994 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:53.834097 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:56.331556 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:55.464589 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:57.465198 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:55.939032 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:57.939901 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:59.940433 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:55.623283 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:58.123546 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:58.831287 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:00.833045 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:59.963001 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:02.464145 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:02.438594 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:04.439026 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:00.623369 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:03.122925 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:03.336121 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:05.832499 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:04.962987 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:06.963706 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:06.439557 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:08.440103 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:05.623650 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:08.123661 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:08.333356 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:10.832246 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:09.462321 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:11.464231 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:10.440612 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:12.939770 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:10.622705 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:13.123057 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:15.123165 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:13.330980 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:15.331911 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:13.963350 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:15.965533 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:18.464316 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:15.439711 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:17.940475 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:19.940957 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:17.124102 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:19.124940 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:17.334609 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:19.832181 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:21.834883 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:20.468955 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:22.964039 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:22.441403 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:24.938835 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:21.624672 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:24.121761 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:24.332265 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:26.332655 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:25.463695 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:27.963694 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:27.963726 1011501 pod_ready.go:81] duration metric: took 4m0.008813288s waiting for pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace to be "Ready" ...
	E0116 03:17:27.963735 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:17:27.963742 1011501 pod_ready.go:38] duration metric: took 4m3.208815045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:17:27.963758 1011501 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:17:27.963814 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:17:27.963886 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:17:28.018667 1011501 cri.go:89] found id: "42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:28.018693 1011501 cri.go:89] found id: ""
	I0116 03:17:28.018701 1011501 logs.go:284] 1 containers: [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f]
	I0116 03:17:28.018769 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.023716 1011501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:17:28.023802 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:17:28.076139 1011501 cri.go:89] found id: "36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:28.076173 1011501 cri.go:89] found id: ""
	I0116 03:17:28.076182 1011501 logs.go:284] 1 containers: [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618]
	I0116 03:17:28.076233 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.080954 1011501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:17:28.081020 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:17:28.126518 1011501 cri.go:89] found id: "2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:28.126544 1011501 cri.go:89] found id: ""
	I0116 03:17:28.126552 1011501 logs.go:284] 1 containers: [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70]
	I0116 03:17:28.126611 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.131611 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:17:28.131692 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:17:28.204571 1011501 cri.go:89] found id: "ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:28.204604 1011501 cri.go:89] found id: ""
	I0116 03:17:28.204612 1011501 logs.go:284] 1 containers: [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8]
	I0116 03:17:28.204672 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.210340 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:17:28.210415 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:17:28.262556 1011501 cri.go:89] found id: "da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:28.262587 1011501 cri.go:89] found id: ""
	I0116 03:17:28.262598 1011501 logs.go:284] 1 containers: [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047]
	I0116 03:17:28.262666 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.267670 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:17:28.267763 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:17:28.312958 1011501 cri.go:89] found id: "f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:28.312982 1011501 cri.go:89] found id: ""
	I0116 03:17:28.312990 1011501 logs.go:284] 1 containers: [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994]
	I0116 03:17:28.313040 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.317874 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:17:28.317951 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:17:28.363140 1011501 cri.go:89] found id: ""
	I0116 03:17:28.363172 1011501 logs.go:284] 0 containers: []
	W0116 03:17:28.363181 1011501 logs.go:286] No container was found matching "kindnet"
	I0116 03:17:28.363188 1011501 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:17:28.363245 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:17:28.408300 1011501 cri.go:89] found id: "0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:28.408330 1011501 cri.go:89] found id: "653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:28.408335 1011501 cri.go:89] found id: ""
	I0116 03:17:28.408342 1011501 logs.go:284] 2 containers: [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76]
	I0116 03:17:28.408406 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.413146 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.418553 1011501 logs.go:123] Gathering logs for kube-scheduler [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8] ...
	I0116 03:17:28.418588 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:28.466255 1011501 logs.go:123] Gathering logs for kube-proxy [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047] ...
	I0116 03:17:28.466305 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:28.511913 1011501 logs.go:123] Gathering logs for storage-provisioner [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657] ...
	I0116 03:17:28.511954 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:28.551053 1011501 logs.go:123] Gathering logs for dmesg ...
	I0116 03:17:28.551093 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:17:28.571627 1011501 logs.go:123] Gathering logs for kube-controller-manager [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994] ...
	I0116 03:17:28.571663 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:28.631193 1011501 logs.go:123] Gathering logs for storage-provisioner [653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76] ...
	I0116 03:17:28.631236 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:28.671010 1011501 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:17:28.671047 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:17:26.940503 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:28.941291 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:26.123594 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:28.124053 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:28.341231 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:30.831479 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:29.167771 1011501 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:17:29.167828 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:17:29.340535 1011501 logs.go:123] Gathering logs for etcd [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618] ...
	I0116 03:17:29.340574 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:29.397815 1011501 logs.go:123] Gathering logs for container status ...
	I0116 03:17:29.397861 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:17:29.459355 1011501 logs.go:123] Gathering logs for kubelet ...
	I0116 03:17:29.459408 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:17:29.519244 1011501 logs.go:123] Gathering logs for kube-apiserver [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f] ...
	I0116 03:17:29.519289 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:29.577686 1011501 logs.go:123] Gathering logs for coredns [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70] ...
	I0116 03:17:29.577736 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:32.124219 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:17:32.141191 1011501 api_server.go:72] duration metric: took 4m13.431910425s to wait for apiserver process to appear ...
	I0116 03:17:32.141224 1011501 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:17:32.141316 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:17:32.141397 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:17:32.182105 1011501 cri.go:89] found id: "42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:32.182133 1011501 cri.go:89] found id: ""
	I0116 03:17:32.182142 1011501 logs.go:284] 1 containers: [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f]
	I0116 03:17:32.182200 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.186819 1011501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:17:32.186900 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:17:32.234240 1011501 cri.go:89] found id: "36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:32.234282 1011501 cri.go:89] found id: ""
	I0116 03:17:32.234294 1011501 logs.go:284] 1 containers: [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618]
	I0116 03:17:32.234366 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.240481 1011501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:17:32.240550 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:17:32.284981 1011501 cri.go:89] found id: "2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:32.285016 1011501 cri.go:89] found id: ""
	I0116 03:17:32.285028 1011501 logs.go:284] 1 containers: [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70]
	I0116 03:17:32.285095 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.289894 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:17:32.289985 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:17:32.331520 1011501 cri.go:89] found id: "ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:32.331555 1011501 cri.go:89] found id: ""
	I0116 03:17:32.331567 1011501 logs.go:284] 1 containers: [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8]
	I0116 03:17:32.331646 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.336053 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:17:32.336131 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:17:32.383199 1011501 cri.go:89] found id: "da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:32.383233 1011501 cri.go:89] found id: ""
	I0116 03:17:32.383253 1011501 logs.go:284] 1 containers: [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047]
	I0116 03:17:32.383324 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.388197 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:17:32.388278 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:17:32.435679 1011501 cri.go:89] found id: "f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:32.435711 1011501 cri.go:89] found id: ""
	I0116 03:17:32.435722 1011501 logs.go:284] 1 containers: [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994]
	I0116 03:17:32.435795 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.441503 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:17:32.441578 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:17:32.484750 1011501 cri.go:89] found id: ""
	I0116 03:17:32.484783 1011501 logs.go:284] 0 containers: []
	W0116 03:17:32.484794 1011501 logs.go:286] No container was found matching "kindnet"
	I0116 03:17:32.484803 1011501 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:17:32.484872 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:17:32.534967 1011501 cri.go:89] found id: "0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:32.534996 1011501 cri.go:89] found id: "653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:32.535002 1011501 cri.go:89] found id: ""
	I0116 03:17:32.535011 1011501 logs.go:284] 2 containers: [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76]
	I0116 03:17:32.535079 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.539828 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.544640 1011501 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:17:32.544670 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:17:32.681760 1011501 logs.go:123] Gathering logs for kube-apiserver [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f] ...
	I0116 03:17:32.681831 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:32.741557 1011501 logs.go:123] Gathering logs for container status ...
	I0116 03:17:32.741606 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:17:32.791811 1011501 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:17:32.791857 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:17:33.242377 1011501 logs.go:123] Gathering logs for etcd [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618] ...
	I0116 03:17:33.242424 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:33.303162 1011501 logs.go:123] Gathering logs for kube-proxy [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047] ...
	I0116 03:17:33.303211 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:33.346935 1011501 logs.go:123] Gathering logs for storage-provisioner [653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76] ...
	I0116 03:17:33.346975 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:33.393563 1011501 logs.go:123] Gathering logs for kube-controller-manager [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994] ...
	I0116 03:17:33.393603 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:33.453859 1011501 logs.go:123] Gathering logs for storage-provisioner [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657] ...
	I0116 03:17:33.453902 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:33.492763 1011501 logs.go:123] Gathering logs for kubelet ...
	I0116 03:17:33.492797 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:17:33.555700 1011501 logs.go:123] Gathering logs for coredns [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70] ...
	I0116 03:17:33.555742 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:33.601049 1011501 logs.go:123] Gathering logs for kube-scheduler [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8] ...
	I0116 03:17:33.601084 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:33.652000 1011501 logs.go:123] Gathering logs for dmesg ...
	I0116 03:17:33.652035 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:17:31.438487 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:33.440493 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:30.621532 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:32.622315 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:34.622840 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:32.832920 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:35.331711 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:36.168102 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:17:36.173921 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 200:
	ok
	I0116 03:17:36.175763 1011501 api_server.go:141] control plane version: v1.28.4
	I0116 03:17:36.175789 1011501 api_server.go:131] duration metric: took 4.034557823s to wait for apiserver health ...
	I0116 03:17:36.175798 1011501 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:17:36.175826 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:17:36.175890 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:17:36.224810 1011501 cri.go:89] found id: "42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:36.224847 1011501 cri.go:89] found id: ""
	I0116 03:17:36.224859 1011501 logs.go:284] 1 containers: [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f]
	I0116 03:17:36.224925 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.229177 1011501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:17:36.229255 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:17:36.271241 1011501 cri.go:89] found id: "36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:36.271272 1011501 cri.go:89] found id: ""
	I0116 03:17:36.271281 1011501 logs.go:284] 1 containers: [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618]
	I0116 03:17:36.271342 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.275772 1011501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:17:36.275846 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:17:36.319867 1011501 cri.go:89] found id: "2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:36.319899 1011501 cri.go:89] found id: ""
	I0116 03:17:36.319909 1011501 logs.go:284] 1 containers: [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70]
	I0116 03:17:36.319977 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.324329 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:17:36.324410 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:17:36.363526 1011501 cri.go:89] found id: "ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:36.363551 1011501 cri.go:89] found id: ""
	I0116 03:17:36.363559 1011501 logs.go:284] 1 containers: [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8]
	I0116 03:17:36.363614 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.367896 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:17:36.367974 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:17:36.408601 1011501 cri.go:89] found id: "da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:36.408642 1011501 cri.go:89] found id: ""
	I0116 03:17:36.408657 1011501 logs.go:284] 1 containers: [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047]
	I0116 03:17:36.408715 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.413041 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:17:36.413111 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:17:36.460091 1011501 cri.go:89] found id: "f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:36.460117 1011501 cri.go:89] found id: ""
	I0116 03:17:36.460126 1011501 logs.go:284] 1 containers: [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994]
	I0116 03:17:36.460201 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.464375 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:17:36.464457 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:17:36.501943 1011501 cri.go:89] found id: ""
	I0116 03:17:36.501969 1011501 logs.go:284] 0 containers: []
	W0116 03:17:36.501977 1011501 logs.go:286] No container was found matching "kindnet"
	I0116 03:17:36.501984 1011501 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:17:36.502037 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:17:36.550841 1011501 cri.go:89] found id: "0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:36.550874 1011501 cri.go:89] found id: "653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:36.550882 1011501 cri.go:89] found id: ""
	I0116 03:17:36.550892 1011501 logs.go:284] 2 containers: [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76]
	I0116 03:17:36.550976 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.555728 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.560058 1011501 logs.go:123] Gathering logs for kube-controller-manager [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994] ...
	I0116 03:17:36.560087 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:36.618163 1011501 logs.go:123] Gathering logs for kubelet ...
	I0116 03:17:36.618208 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:17:36.673167 1011501 logs.go:123] Gathering logs for dmesg ...
	I0116 03:17:36.673216 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:17:36.690061 1011501 logs.go:123] Gathering logs for storage-provisioner [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657] ...
	I0116 03:17:36.690099 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:36.732953 1011501 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:17:36.733013 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:17:37.127465 1011501 logs.go:123] Gathering logs for container status ...
	I0116 03:17:37.127504 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:17:37.176618 1011501 logs.go:123] Gathering logs for coredns [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70] ...
	I0116 03:17:37.176660 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:37.223851 1011501 logs.go:123] Gathering logs for kube-scheduler [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8] ...
	I0116 03:17:37.223895 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:37.265502 1011501 logs.go:123] Gathering logs for kube-proxy [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047] ...
	I0116 03:17:37.265542 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:37.323107 1011501 logs.go:123] Gathering logs for storage-provisioner [653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76] ...
	I0116 03:17:37.323140 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:37.368305 1011501 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:17:37.368348 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:17:37.519310 1011501 logs.go:123] Gathering logs for kube-apiserver [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f] ...
	I0116 03:17:37.519352 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:37.580961 1011501 logs.go:123] Gathering logs for etcd [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618] ...
	I0116 03:17:37.581000 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:35.940233 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:38.439452 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:40.146809 1011501 system_pods.go:59] 8 kube-system pods found
	I0116 03:17:40.146843 1011501 system_pods.go:61] "coredns-5dd5756b68-stqh5" [adbcef96-218b-42ed-9daf-72c274be0690] Running
	I0116 03:17:40.146849 1011501 system_pods.go:61] "etcd-embed-certs-480663" [6694af11-3b1a-4b84-adcb-3416b87a076f] Running
	I0116 03:17:40.146853 1011501 system_pods.go:61] "kube-apiserver-embed-certs-480663" [3a3d0e5d-35eb-4f2f-9686-b17af19bc777] Running
	I0116 03:17:40.146857 1011501 system_pods.go:61] "kube-controller-manager-embed-certs-480663" [729b671f-23b9-409b-9f11-d6992f2355c7] Running
	I0116 03:17:40.146861 1011501 system_pods.go:61] "kube-proxy-j4786" [aabb98a7-fe55-4105-a5d2-c1e312464107] Running
	I0116 03:17:40.146865 1011501 system_pods.go:61] "kube-scheduler-embed-certs-480663" [11baa5d7-6e7b-4a25-be6f-e7975b51023d] Running
	I0116 03:17:40.146872 1011501 system_pods.go:61] "metrics-server-57f55c9bc5-7d2fh" [512cf579-f335-4995-8721-74bb84da776e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:17:40.146877 1011501 system_pods.go:61] "storage-provisioner" [da59ff59-869f-48a9-a5c5-c95bb807cbcf] Running
	I0116 03:17:40.146887 1011501 system_pods.go:74] duration metric: took 3.971081813s to wait for pod list to return data ...
	I0116 03:17:40.146900 1011501 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:17:40.149755 1011501 default_sa.go:45] found service account: "default"
	I0116 03:17:40.149786 1011501 default_sa.go:55] duration metric: took 2.87163ms for default service account to be created ...
	I0116 03:17:40.149798 1011501 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:17:40.156300 1011501 system_pods.go:86] 8 kube-system pods found
	I0116 03:17:40.156327 1011501 system_pods.go:89] "coredns-5dd5756b68-stqh5" [adbcef96-218b-42ed-9daf-72c274be0690] Running
	I0116 03:17:40.156333 1011501 system_pods.go:89] "etcd-embed-certs-480663" [6694af11-3b1a-4b84-adcb-3416b87a076f] Running
	I0116 03:17:40.156337 1011501 system_pods.go:89] "kube-apiserver-embed-certs-480663" [3a3d0e5d-35eb-4f2f-9686-b17af19bc777] Running
	I0116 03:17:40.156341 1011501 system_pods.go:89] "kube-controller-manager-embed-certs-480663" [729b671f-23b9-409b-9f11-d6992f2355c7] Running
	I0116 03:17:40.156345 1011501 system_pods.go:89] "kube-proxy-j4786" [aabb98a7-fe55-4105-a5d2-c1e312464107] Running
	I0116 03:17:40.156349 1011501 system_pods.go:89] "kube-scheduler-embed-certs-480663" [11baa5d7-6e7b-4a25-be6f-e7975b51023d] Running
	I0116 03:17:40.156355 1011501 system_pods.go:89] "metrics-server-57f55c9bc5-7d2fh" [512cf579-f335-4995-8721-74bb84da776e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:17:40.156360 1011501 system_pods.go:89] "storage-provisioner" [da59ff59-869f-48a9-a5c5-c95bb807cbcf] Running
	I0116 03:17:40.156367 1011501 system_pods.go:126] duration metric: took 6.548782ms to wait for k8s-apps to be running ...
	I0116 03:17:40.156374 1011501 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:17:40.156421 1011501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:17:40.173539 1011501 system_svc.go:56] duration metric: took 17.152768ms WaitForService to wait for kubelet.
	I0116 03:17:40.173574 1011501 kubeadm.go:581] duration metric: took 4m21.464303041s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:17:40.173623 1011501 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:17:40.177277 1011501 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:17:40.177309 1011501 node_conditions.go:123] node cpu capacity is 2
	I0116 03:17:40.177324 1011501 node_conditions.go:105] duration metric: took 3.695642ms to run NodePressure ...
	I0116 03:17:40.177336 1011501 start.go:228] waiting for startup goroutines ...
	I0116 03:17:40.177342 1011501 start.go:233] waiting for cluster config update ...
	I0116 03:17:40.177353 1011501 start.go:242] writing updated cluster config ...
	I0116 03:17:40.177673 1011501 ssh_runner.go:195] Run: rm -f paused
	I0116 03:17:40.237611 1011501 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:17:40.239605 1011501 out.go:177] * Done! kubectl is now configured to use "embed-certs-480663" cluster and "default" namespace by default
	I0116 03:17:36.624876 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:39.123549 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:37.332861 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:39.832707 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:40.440194 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:42.939505 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:41.123729 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:43.124392 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:42.335659 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:44.833290 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:45.438892 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:47.439827 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:49.440946 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:45.622763 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:47.623098 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:49.623524 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:47.331849 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:49.832349 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:51.938022 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:53.939098 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:52.122851 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:54.123517 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:52.333667 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:54.832564 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:55.939981 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:57.941055 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:56.623347 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:59.123492 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:57.332003 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:59.332838 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:01.333665 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:00.440795 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:02.939475 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:01.623191 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:03.623475 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:03.831584 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:05.832669 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:05.438818 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:07.940446 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:06.125503 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:08.624414 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:07.832961 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:10.332435 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:10.439517 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:12.938184 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:14.939116 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:10.626134 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:13.123124 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:14.116258 1011955 pod_ready.go:81] duration metric: took 4m0.000962112s waiting for pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace to be "Ready" ...
	E0116 03:18:14.116292 1011955 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:18:14.116325 1011955 pod_ready.go:38] duration metric: took 4m14.081176627s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:18:14.116391 1011955 kubeadm.go:640] restartCluster took 4m34.84299912s
	W0116 03:18:14.116515 1011955 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:18:14.116555 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:18:12.832787 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:14.833104 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:16.833154 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:16.939522 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:17.932247 1011681 pod_ready.go:81] duration metric: took 4m0.000397189s waiting for pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace to be "Ready" ...
	E0116 03:18:17.932288 1011681 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:18:17.932314 1011681 pod_ready.go:38] duration metric: took 4m1.200532474s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:18:17.932356 1011681 kubeadm.go:640] restartCluster took 4m59.25901651s
	W0116 03:18:17.932448 1011681 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:18:17.932484 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:18:19.332379 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:21.332813 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:24.791837 1011681 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.859306364s)
	I0116 03:18:24.791938 1011681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:18:24.810486 1011681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:18:24.822414 1011681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:18:24.834751 1011681 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:18:24.834814 1011681 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0116 03:18:25.070509 1011681 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:18:23.832402 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:25.834563 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:28.584480 1011955 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.467896175s)
	I0116 03:18:28.584554 1011955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:18:28.602324 1011955 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:18:28.614934 1011955 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:18:28.624508 1011955 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:18:28.624564 1011955 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 03:18:28.679880 1011955 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 03:18:28.679970 1011955 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:18:28.862872 1011955 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:18:28.862987 1011955 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:18:28.863151 1011955 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:18:29.129842 1011955 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:18:29.131728 1011955 out.go:204]   - Generating certificates and keys ...
	I0116 03:18:29.131835 1011955 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:18:29.131918 1011955 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:18:29.132072 1011955 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:18:29.132174 1011955 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:18:29.132294 1011955 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:18:29.132393 1011955 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:18:29.132472 1011955 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:18:29.132553 1011955 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:18:29.132646 1011955 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:18:29.132781 1011955 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:18:29.132867 1011955 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:18:29.132972 1011955 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:18:29.254715 1011955 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:18:29.440667 1011955 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:18:29.640243 1011955 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:18:29.792291 1011955 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:18:29.793072 1011955 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:18:29.799431 1011955 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:18:29.801398 1011955 out.go:204]   - Booting up control plane ...
	I0116 03:18:29.801516 1011955 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:18:29.801601 1011955 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:18:29.801686 1011955 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:18:29.820061 1011955 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:18:29.823043 1011955 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:18:29.823191 1011955 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 03:18:29.951227 1011955 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:18:27.835298 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:30.331925 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:32.332063 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:34.333064 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:36.833631 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:38.602437 1011681 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0116 03:18:38.602518 1011681 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:18:38.602608 1011681 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:18:38.602737 1011681 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:18:38.602861 1011681 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:18:38.602991 1011681 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:18:38.603089 1011681 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:18:38.603148 1011681 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0116 03:18:38.603223 1011681 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:18:38.604856 1011681 out.go:204]   - Generating certificates and keys ...
	I0116 03:18:38.604966 1011681 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:18:38.605046 1011681 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:18:38.605139 1011681 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:18:38.605222 1011681 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:18:38.605299 1011681 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:18:38.605359 1011681 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:18:38.605446 1011681 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:18:38.605510 1011681 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:18:38.605570 1011681 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:18:38.605629 1011681 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:18:38.605662 1011681 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:18:38.605707 1011681 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:18:38.605749 1011681 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:18:38.605792 1011681 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:18:38.605878 1011681 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:18:38.605964 1011681 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:18:38.606070 1011681 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:18:38.608024 1011681 out.go:204]   - Booting up control plane ...
	I0116 03:18:38.608146 1011681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:18:38.608263 1011681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:18:38.608375 1011681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:18:38.608508 1011681 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:18:38.608676 1011681 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:18:38.608755 1011681 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.506014 seconds
	I0116 03:18:38.608891 1011681 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:18:38.609075 1011681 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:18:38.609173 1011681 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:18:38.609358 1011681 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-788237 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0116 03:18:38.609437 1011681 kubeadm.go:322] [bootstrap-token] Using token: ou2w4b.xm5ff9ai4zzr80lg
	I0116 03:18:38.611110 1011681 out.go:204]   - Configuring RBAC rules ...
	I0116 03:18:38.611236 1011681 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:18:38.611429 1011681 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:18:38.611590 1011681 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:18:38.611730 1011681 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:18:38.611834 1011681 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:18:38.611886 1011681 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:18:38.611942 1011681 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:18:38.611948 1011681 kubeadm.go:322] 
	I0116 03:18:38.612019 1011681 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:18:38.612024 1011681 kubeadm.go:322] 
	I0116 03:18:38.612116 1011681 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:18:38.612122 1011681 kubeadm.go:322] 
	I0116 03:18:38.612153 1011681 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:18:38.612235 1011681 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:18:38.612296 1011681 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:18:38.612302 1011681 kubeadm.go:322] 
	I0116 03:18:38.612363 1011681 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:18:38.612452 1011681 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:18:38.612535 1011681 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:18:38.612541 1011681 kubeadm.go:322] 
	I0116 03:18:38.612641 1011681 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0116 03:18:38.612732 1011681 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:18:38.612738 1011681 kubeadm.go:322] 
	I0116 03:18:38.612838 1011681 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ou2w4b.xm5ff9ai4zzr80lg \
	I0116 03:18:38.612975 1011681 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b \
	I0116 03:18:38.613007 1011681 kubeadm.go:322]     --control-plane 	  
	I0116 03:18:38.613013 1011681 kubeadm.go:322] 
	I0116 03:18:38.613115 1011681 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:18:38.613122 1011681 kubeadm.go:322] 
	I0116 03:18:38.613224 1011681 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ou2w4b.xm5ff9ai4zzr80lg \
	I0116 03:18:38.613366 1011681 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b 
	I0116 03:18:38.613378 1011681 cni.go:84] Creating CNI manager for ""
	I0116 03:18:38.613386 1011681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:18:38.615140 1011681 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:18:38.454228 1011955 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502851 seconds
	I0116 03:18:38.454363 1011955 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:18:38.474581 1011955 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:18:39.018312 1011955 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:18:39.018620 1011955 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-775571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 03:18:39.535782 1011955 kubeadm.go:322] [bootstrap-token] Using token: 8fntor.yrfb8kfaxajcp5qt
	I0116 03:18:39.537357 1011955 out.go:204]   - Configuring RBAC rules ...
	I0116 03:18:39.537505 1011955 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:18:39.552902 1011955 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:18:39.571482 1011955 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:18:39.575866 1011955 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:18:39.581062 1011955 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:18:39.586833 1011955 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:18:39.619342 1011955 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:18:39.888315 1011955 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:18:39.966804 1011955 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:18:39.971287 1011955 kubeadm.go:322] 
	I0116 03:18:39.971371 1011955 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:18:39.971383 1011955 kubeadm.go:322] 
	I0116 03:18:39.971472 1011955 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:18:39.971482 1011955 kubeadm.go:322] 
	I0116 03:18:39.971556 1011955 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:18:39.971657 1011955 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:18:39.971750 1011955 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:18:39.971761 1011955 kubeadm.go:322] 
	I0116 03:18:39.971835 1011955 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 03:18:39.971846 1011955 kubeadm.go:322] 
	I0116 03:18:39.971927 1011955 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 03:18:39.971941 1011955 kubeadm.go:322] 
	I0116 03:18:39.971984 1011955 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:18:39.972080 1011955 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:18:39.972187 1011955 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:18:39.972199 1011955 kubeadm.go:322] 
	I0116 03:18:39.972317 1011955 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:18:39.972431 1011955 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:18:39.972450 1011955 kubeadm.go:322] 
	I0116 03:18:39.972580 1011955 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 8fntor.yrfb8kfaxajcp5qt \
	I0116 03:18:39.972743 1011955 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b \
	I0116 03:18:39.972782 1011955 kubeadm.go:322] 	--control-plane 
	I0116 03:18:39.972805 1011955 kubeadm.go:322] 
	I0116 03:18:39.972924 1011955 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:18:39.972942 1011955 kubeadm.go:322] 
	I0116 03:18:39.973047 1011955 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 8fntor.yrfb8kfaxajcp5qt \
	I0116 03:18:39.973210 1011955 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b 
	I0116 03:18:39.974532 1011955 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:18:39.974577 1011955 cni.go:84] Creating CNI manager for ""
	I0116 03:18:39.974604 1011955 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:18:39.976623 1011955 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:18:38.616520 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:18:38.639990 1011681 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:18:38.666967 1011681 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:18:38.667168 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:38.667280 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=old-k8s-version-788237 minikube.k8s.io/updated_at=2024_01_16T03_18_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:38.688522 1011681 ops.go:34] apiserver oom_adj: -16
	I0116 03:18:38.976096 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:39.476978 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:39.976086 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:39.977876 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:18:40.005273 1011955 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:18:40.087713 1011955 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:18:40.087863 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:40.087863 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=default-k8s-diff-port-775571 minikube.k8s.io/updated_at=2024_01_16T03_18_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:40.168057 1011955 ops.go:34] apiserver oom_adj: -16
	I0116 03:18:40.492375 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:39.331115 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:41.332298 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:40.476064 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:40.977085 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:41.476706 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:41.976429 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:42.476172 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:42.976176 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:43.476449 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:43.977056 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:44.476761 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:44.976151 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:40.992990 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:41.492564 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:41.992578 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:42.493062 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:42.993372 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:43.493473 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:43.993319 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:44.493019 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:44.993411 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:45.492880 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:43.832198 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:44.824162 1011460 pod_ready.go:81] duration metric: took 4m0.000326915s waiting for pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace to be "Ready" ...
	E0116 03:18:44.824195 1011460 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:18:44.824281 1011460 pod_ready.go:38] duration metric: took 4m12.556069814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:18:44.824351 1011460 kubeadm.go:640] restartCluster took 4m33.151422709s
	W0116 03:18:44.824438 1011460 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:18:44.824479 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:18:45.476629 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:45.977106 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:46.476146 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:46.977113 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:47.476693 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:47.976945 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:48.477170 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:48.976394 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:49.476848 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:49.976797 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:45.993346 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:46.493256 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:46.993006 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:47.492403 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:47.992813 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:48.493940 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:48.992944 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:49.493490 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:49.993389 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:50.492678 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:50.992627 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:51.493472 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:51.993052 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:52.492430 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:52.646080 1011955 kubeadm.go:1088] duration metric: took 12.558292993s to wait for elevateKubeSystemPrivileges.
	I0116 03:18:52.646138 1011955 kubeadm.go:406] StartCluster complete in 5m13.439862133s
	I0116 03:18:52.646169 1011955 settings.go:142] acquiring lock: {Name:mk39c69fb5454efd5afab021c02c3bec1f1b4e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:18:52.646281 1011955 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:18:52.648500 1011955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:18:52.648860 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:18:52.648869 1011955 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:18:52.648980 1011955 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-775571"
	I0116 03:18:52.649003 1011955 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-775571"
	I0116 03:18:52.649005 1011955 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-775571"
	I0116 03:18:52.649029 1011955 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-775571"
	I0116 03:18:52.649034 1011955 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-775571"
	W0116 03:18:52.649043 1011955 addons.go:243] addon metrics-server should already be in state true
	I0116 03:18:52.649114 1011955 config.go:182] Loaded profile config "default-k8s-diff-port-775571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:18:52.649008 1011955 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-775571"
	I0116 03:18:52.649130 1011955 host.go:66] Checking if "default-k8s-diff-port-775571" exists ...
	W0116 03:18:52.649149 1011955 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:18:52.649212 1011955 host.go:66] Checking if "default-k8s-diff-port-775571" exists ...
	I0116 03:18:52.649529 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.649563 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.649529 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.649613 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.649660 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.649697 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.666073 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0116 03:18:52.666727 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.666879 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41643
	I0116 03:18:52.667406 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.667435 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.667447 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.667814 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.667985 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.668015 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.668030 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetState
	I0116 03:18:52.668373 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.668745 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33307
	I0116 03:18:52.668995 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.669057 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.669205 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.669742 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.669767 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.670181 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.670725 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.670760 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.672109 1011955 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-775571"
	W0116 03:18:52.672134 1011955 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:18:52.672165 1011955 host.go:66] Checking if "default-k8s-diff-port-775571" exists ...
	I0116 03:18:52.672575 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.672630 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.687775 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41925
	I0116 03:18:52.689625 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0116 03:18:52.689778 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.690073 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.690203 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41865
	I0116 03:18:52.690460 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.690473 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.690742 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.690859 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.691055 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.691067 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.691409 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.691627 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetState
	I0116 03:18:52.692030 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetState
	I0116 03:18:52.693938 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:18:52.696389 1011955 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:18:52.694587 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:18:52.694891 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.698046 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.698164 1011955 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:18:52.698189 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:18:52.698218 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:18:52.700172 1011955 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:18:52.701996 1011955 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:18:52.702018 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:18:52.702043 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:18:52.702058 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.699885 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.702560 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.702602 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.702805 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:18:52.702820 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.702870 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:18:52.703094 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:18:52.703363 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:18:52.703544 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:18:52.705663 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.706131 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:18:52.706164 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.706417 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:18:52.706587 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:18:52.706758 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:18:52.706916 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:18:52.725464 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39049
	I0116 03:18:52.726113 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.726781 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.726824 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.727253 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.727482 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetState
	I0116 03:18:52.729485 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:18:52.729789 1011955 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:18:52.729823 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:18:52.729848 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:18:52.732669 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.733121 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:18:52.733142 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.733351 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:18:52.733557 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:18:52.733766 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:18:52.733963 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:18:52.873193 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:18:52.909098 1011955 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:18:52.909141 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:18:52.941709 1011955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:18:52.942443 1011955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:18:52.966702 1011955 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:18:52.966736 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:18:53.020737 1011955 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:18:53.020823 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:18:53.066186 1011955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:18:53.170342 1011955 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-775571" context rescaled to 1 replicas
	I0116 03:18:53.170433 1011955 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:18:53.172678 1011955 out.go:177] * Verifying Kubernetes components...
	I0116 03:18:50.476090 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:50.976173 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:51.476673 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:51.976165 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:52.476238 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:52.976850 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:53.476943 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:53.686011 1011681 kubeadm.go:1088] duration metric: took 15.018895956s to wait for elevateKubeSystemPrivileges.
	I0116 03:18:53.686052 1011681 kubeadm.go:406] StartCluster complete in 5m35.06362605s
	I0116 03:18:53.686080 1011681 settings.go:142] acquiring lock: {Name:mk39c69fb5454efd5afab021c02c3bec1f1b4e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:18:53.686180 1011681 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:18:53.688860 1011681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:18:53.689175 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:18:53.689247 1011681 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:18:53.689333 1011681 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-788237"
	I0116 03:18:53.689349 1011681 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-788237"
	I0116 03:18:53.689364 1011681 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-788237"
	I0116 03:18:53.689377 1011681 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-788237"
	W0116 03:18:53.689389 1011681 addons.go:243] addon metrics-server should already be in state true
	I0116 03:18:53.689436 1011681 config.go:182] Loaded profile config "old-k8s-version-788237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:18:53.689455 1011681 host.go:66] Checking if "old-k8s-version-788237" exists ...
	I0116 03:18:53.689378 1011681 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-788237"
	I0116 03:18:53.689357 1011681 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-788237"
	W0116 03:18:53.689599 1011681 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:18:53.689645 1011681 host.go:66] Checking if "old-k8s-version-788237" exists ...
	I0116 03:18:53.689901 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.689924 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.689924 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.689950 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.690144 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.690180 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.711157 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37709
	I0116 03:18:53.713950 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.714211 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38563
	I0116 03:18:53.714552 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.714576 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.714663 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.715012 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.715181 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.715199 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.715683 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.715710 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.716263 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.716605 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetState
	I0116 03:18:53.720570 1011681 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-788237"
	W0116 03:18:53.720598 1011681 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:18:53.720630 1011681 host.go:66] Checking if "old-k8s-version-788237" exists ...
	I0116 03:18:53.721140 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.721183 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.724181 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0116 03:18:53.724763 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.725334 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.725364 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.725737 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.726313 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.726362 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.737615 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46761
	I0116 03:18:53.738167 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.738714 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.738739 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.739154 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.739431 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetState
	I0116 03:18:53.741559 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:18:53.741765 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41413
	I0116 03:18:53.744019 1011681 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:18:53.745656 1011681 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:18:53.745691 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:18:53.745718 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:18:53.745868 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.746513 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.746535 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.746969 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.747587 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.747621 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.749923 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.749959 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:18:53.749982 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.750294 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:18:53.750501 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:18:53.750814 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:18:53.751535 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:18:53.755634 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
	I0116 03:18:53.756246 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.756894 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.756918 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.761942 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.765938 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetState
	I0116 03:18:53.769965 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:18:53.770273 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40811
	I0116 03:18:53.770837 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.772568 1011681 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:18:53.771317 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.774128 1011681 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:18:53.772620 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.774150 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:18:53.774254 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:18:53.774578 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.775367 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetState
	I0116 03:18:53.778662 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.778671 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:18:53.778694 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:18:53.778716 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.781111 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:18:53.781144 1011681 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:18:53.781161 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:18:53.781185 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:18:53.781359 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:18:53.781509 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:18:53.781647 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:18:53.784375 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.784817 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:18:53.784841 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.785021 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:18:53.785248 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:18:53.785367 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:18:53.785586 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:18:53.920099 1011681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:18:53.964232 1011681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:18:53.983575 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:18:54.005702 1011681 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:18:54.005736 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:18:54.084574 1011681 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:18:54.084606 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:18:54.143597 1011681 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:18:54.143640 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:18:54.195269 1011681 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-788237" context rescaled to 1 replicas
	I0116 03:18:54.195324 1011681 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:18:54.197378 1011681 out.go:177] * Verifying Kubernetes components...
	I0116 03:18:54.198806 1011681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:18:54.323439 1011681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:18:55.133484 1011681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.169208691s)
	I0116 03:18:55.133595 1011681 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-788237" to be "Ready" ...
	I0116 03:18:55.133486 1011681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.213323807s)
	I0116 03:18:55.133650 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.133664 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.133531 1011681 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.149922539s)
	I0116 03:18:55.133873 1011681 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0116 03:18:55.133967 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Closing plugin on server side
	I0116 03:18:55.133609 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.133993 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.134363 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Closing plugin on server side
	I0116 03:18:55.134402 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.134415 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.134426 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.134439 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.134750 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Closing plugin on server side
	I0116 03:18:55.134766 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.134781 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.135982 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.136002 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.136014 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.136046 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.136623 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.136656 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:53.174208 1011955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:18:54.899603 1011955 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.026351829s)
	I0116 03:18:54.899706 1011955 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
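The two sed expressions in the pipeline that just completed (here and in the v1.16.0 profile earlier in this log) splice a "log" directive ahead of the "errors" line and a "hosts" block ahead of the "forward . /etc/resolv.conf" line before the ConfigMap is replaced, so that host.minikube.internal resolves to the host-side gateway. A rough sketch of just the patched portion of the Corefile (everything else is elided; the surrounding directives depend on the Kubernetes version, and the v1.16.0 profile gets 192.168.39.1 instead):

	        log
	        errors
	        ...
	        hosts {
	           192.168.72.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...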
	I0116 03:18:55.340175 1011955 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.397688954s)
	I0116 03:18:55.340238 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.340252 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.340413 1011955 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.398670161s)
	I0116 03:18:55.340439 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.340449 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.344833 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Closing plugin on server side
	I0116 03:18:55.344839 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Closing plugin on server side
	I0116 03:18:55.344858 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.344858 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.344871 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.344877 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.344886 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.344889 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.344897 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.344899 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.345154 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.345172 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.345207 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Closing plugin on server side
	I0116 03:18:55.345229 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Closing plugin on server side
	I0116 03:18:55.345311 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.345328 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.411967 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.412006 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.412382 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.412402 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.229555 1011681 node_ready.go:49] node "old-k8s-version-788237" has status "Ready":"True"
	I0116 03:18:55.229641 1011681 node_ready.go:38] duration metric: took 95.965741ms waiting for node "old-k8s-version-788237" to be "Ready" ...
	I0116 03:18:55.229667 1011681 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:18:55.290235 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.290288 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.290652 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.290675 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.311952 1011681 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qmzl6" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:55.886230 1011681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.562731329s)
	I0116 03:18:55.886302 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.886324 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.886813 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.886840 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.886852 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.886863 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.889105 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Closing plugin on server side
	I0116 03:18:55.889151 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.889160 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.889171 1011681 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-788237"
	I0116 03:18:55.891206 1011681 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:18:55.952771 1011955 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.778522731s)
	I0116 03:18:55.952832 1011955 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-775571" to be "Ready" ...
	I0116 03:18:55.953294 1011955 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.887054667s)
	I0116 03:18:55.953343 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.953359 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.956009 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Closing plugin on server side
	I0116 03:18:55.956050 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.956072 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.956095 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.956106 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.956401 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.956417 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.956428 1011955 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-775571"
	I0116 03:18:55.959261 1011955 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:18:55.893233 1011681 addons.go:505] enable addons completed in 2.203983589s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:18:57.320945 1011681 pod_ready.go:102] pod "coredns-5644d7b6d9-qmzl6" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:59.825898 1011681 pod_ready.go:102] pod "coredns-5644d7b6d9-qmzl6" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:55.960681 1011955 addons.go:505] enable addons completed in 3.311813314s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:18:55.983312 1011955 node_ready.go:49] node "default-k8s-diff-port-775571" has status "Ready":"True"
	I0116 03:18:55.983350 1011955 node_ready.go:38] duration metric: took 30.503183ms waiting for node "default-k8s-diff-port-775571" to be "Ready" ...
	I0116 03:18:55.983366 1011955 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:18:56.004432 1011955 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mk795" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.513965 1011955 pod_ready.go:92] pod "coredns-5dd5756b68-mk795" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:56.514083 1011955 pod_ready.go:81] duration metric: took 509.611409ms waiting for pod "coredns-5dd5756b68-mk795" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.514148 1011955 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.524671 1011955 pod_ready.go:92] pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:56.524770 1011955 pod_ready.go:81] duration metric: took 10.59132ms waiting for pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.524803 1011955 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.538471 1011955 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:56.538581 1011955 pod_ready.go:81] duration metric: took 13.724762ms waiting for pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.538616 1011955 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.549389 1011955 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:56.549494 1011955 pod_ready.go:81] duration metric: took 10.835015ms waiting for pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.549524 1011955 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zw495" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.757971 1011955 pod_ready.go:92] pod "kube-proxy-zw495" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:56.758009 1011955 pod_ready.go:81] duration metric: took 208.445706ms waiting for pod "kube-proxy-zw495" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.758024 1011955 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:57.156938 1011955 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:57.156972 1011955 pod_ready.go:81] duration metric: took 398.939705ms waiting for pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:57.156983 1011955 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:59.164487 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
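The pod_ready checks above and below poll each pod's Ready condition through the API server. A rough client-go equivalent of a single such check is sketched here; podIsReady is a hypothetical helper and the kubeconfig path is a placeholder, so this is only an illustration of the condition lookup, not minikube's pod_ready.go:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the named pod has its Ready condition set to True.
	func podIsReady(kubeconfig, namespace, name string) (bool, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return false, err
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return false, err
		}
		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Pod name taken from the log above; replace the kubeconfig path with your own.
		ready, err := podIsReady("/path/to/kubeconfig", "kube-system", "metrics-server-57f55c9bc5-928d7")
		fmt.Println(ready, err)
	}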
	I0116 03:18:59.818244 1011460 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.993735667s)
	I0116 03:18:59.818326 1011460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:18:59.833153 1011460 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:18:59.842806 1011460 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:18:59.851950 1011460 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:18:59.852010 1011460 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 03:19:00.070447 1011460 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:19:00.320286 1011681 pod_ready.go:92] pod "coredns-5644d7b6d9-qmzl6" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:00.320320 1011681 pod_ready.go:81] duration metric: took 5.0083337s waiting for pod "coredns-5644d7b6d9-qmzl6" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:00.320333 1011681 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tv7gz" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:00.326637 1011681 pod_ready.go:92] pod "kube-proxy-tv7gz" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:00.326664 1011681 pod_ready.go:81] duration metric: took 6.322991ms waiting for pod "kube-proxy-tv7gz" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:00.326677 1011681 pod_ready.go:38] duration metric: took 5.096991549s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:19:00.326699 1011681 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:19:00.326772 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:19:00.343804 1011681 api_server.go:72] duration metric: took 6.148440288s to wait for apiserver process to appear ...
	I0116 03:19:00.343832 1011681 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:19:00.343855 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:19:00.351105 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 200:
	ok
	I0116 03:19:00.352195 1011681 api_server.go:141] control plane version: v1.16.0
	I0116 03:19:00.352263 1011681 api_server.go:131] duration metric: took 8.420277ms to wait for apiserver health ...
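The healthz wait logged just above amounts to repeating an HTTPS GET against :8443/healthz until the body reads "ok". A minimal, self-contained Go sketch of that kind of probe follows; waitForHealthz is an illustrative helper rather than minikube's api_server.go code, and it skips TLS verification only to stay short:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it answers
	// 200 "ok" or the timeout expires. TLS verification is disabled only to
	// keep the sketch self-contained; a real client would trust the cluster CA.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		// Endpoint taken from the log above; adjust for your own cluster.
		if err := waitForHealthz("https://192.168.39.91:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}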
	I0116 03:19:00.352283 1011681 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:19:00.361924 1011681 system_pods.go:59] 4 kube-system pods found
	I0116 03:19:00.361952 1011681 system_pods.go:61] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:00.361957 1011681 system_pods.go:61] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:00.361963 1011681 system_pods.go:61] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:00.361968 1011681 system_pods.go:61] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:00.361977 1011681 system_pods.go:74] duration metric: took 9.67913ms to wait for pod list to return data ...
	I0116 03:19:00.361987 1011681 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:19:00.364600 1011681 default_sa.go:45] found service account: "default"
	I0116 03:19:00.364630 1011681 default_sa.go:55] duration metric: took 2.635157ms for default service account to be created ...
	I0116 03:19:00.364642 1011681 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:19:00.368386 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:00.368409 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:00.368416 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:00.368423 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:00.368430 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:00.368454 1011681 retry.go:31] will retry after 285.445367ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:00.660996 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:00.661033 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:00.661040 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:00.661047 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:00.661055 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:00.661079 1011681 retry.go:31] will retry after 334.380732ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:01.000372 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:01.000401 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:01.000407 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:01.000413 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:01.000418 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:01.000437 1011681 retry.go:31] will retry after 432.029845ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:01.437761 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:01.437794 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:01.437817 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:01.437827 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:01.437835 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:01.437857 1011681 retry.go:31] will retry after 542.969865ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:01.985932 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:01.985965 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:01.985970 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:01.985977 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:01.985984 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:01.986006 1011681 retry.go:31] will retry after 682.538217ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:02.673234 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:02.673268 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:02.673274 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:02.673280 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:02.673286 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:02.673305 1011681 retry.go:31] will retry after 865.818681ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:03.544313 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:03.544355 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:03.544363 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:03.544373 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:03.544383 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:03.544407 1011681 retry.go:31] will retry after 754.732197ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:04.304165 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:04.304205 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:04.304217 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:04.304227 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:04.304235 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:04.304258 1011681 retry.go:31] will retry after 1.101452697s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:01.164856 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:03.165951 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:05.166097 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:05.411683 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:05.411726 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:05.411736 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:05.411750 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:05.411758 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:05.411781 1011681 retry.go:31] will retry after 1.524854445s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:06.941891 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:06.941929 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:06.941939 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:06.941949 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:06.941957 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:06.941984 1011681 retry.go:31] will retry after 1.460454781s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:08.408630 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:08.408662 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:08.408668 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:08.408687 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:08.408692 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:08.408713 1011681 retry.go:31] will retry after 1.769662932s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:10.184053 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:10.184081 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:10.184086 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:10.184093 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:10.184098 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:10.184117 1011681 retry.go:31] will retry after 3.059139s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
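Each "will retry after ..." line above comes from a poll loop whose delay grows between attempts while the etcd, kube-apiserver, kube-controller-manager and kube-scheduler mirror pods have not yet been registered. A bare-bones sketch of that retry shape follows; pollWithBackoff and the 1.4 growth factor are illustrative assumptions, not minikube's retry package:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// pollWithBackoff re-runs check with a growing delay until it succeeds or
	// the overall timeout is exhausted, mirroring the "will retry after ..."
	// pattern in the log above.
	func pollWithBackoff(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().Add(delay).After(deadline) {
				return fmt.Errorf("giving up after %s: %w", timeout, err)
			}
			time.Sleep(delay)
			delay = time.Duration(float64(delay) * 1.4) // stretch the wait between attempts
		}
	}

	func main() {
		attempts := 0
		err := pollWithBackoff(10*time.Second, func() error {
			attempts++
			if attempts < 4 {
				return errors.New("missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler")
			}
			return nil
		})
		fmt.Println(err)
	}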
	I0116 03:19:07.169102 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:09.666541 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:11.938237 1011460 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0116 03:19:11.938354 1011460 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:19:11.938572 1011460 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:19:11.939095 1011460 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:19:11.939269 1011460 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:19:11.939370 1011460 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:19:11.941237 1011460 out.go:204]   - Generating certificates and keys ...
	I0116 03:19:11.941348 1011460 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:19:11.941482 1011460 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:19:11.941579 1011460 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:19:11.941646 1011460 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:19:11.941733 1011460 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:19:11.941821 1011460 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:19:11.941908 1011460 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:19:11.941959 1011460 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:19:11.942018 1011460 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:19:11.942114 1011460 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:19:11.942208 1011460 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:19:11.942278 1011460 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:19:11.942348 1011460 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:19:11.942424 1011460 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0116 03:19:11.942487 1011460 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:19:11.942579 1011460 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:19:11.942659 1011460 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:19:11.942779 1011460 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:19:11.942856 1011460 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:19:11.944468 1011460 out.go:204]   - Booting up control plane ...
	I0116 03:19:11.944556 1011460 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:19:11.944624 1011460 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:19:11.944694 1011460 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:19:11.944847 1011460 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:19:11.944975 1011460 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:19:11.945039 1011460 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 03:19:11.945209 1011460 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:19:11.945282 1011460 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502907 seconds
	I0116 03:19:11.945373 1011460 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:19:11.945476 1011460 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:19:11.945541 1011460 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:19:11.945750 1011460 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-934668 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 03:19:11.945823 1011460 kubeadm.go:322] [bootstrap-token] Using token: pj08z0.5ut3mf4afujawh3s
	I0116 03:19:11.947396 1011460 out.go:204]   - Configuring RBAC rules ...
	I0116 03:19:11.947532 1011460 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:19:11.947645 1011460 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:19:11.947822 1011460 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:19:11.948000 1011460 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:19:11.948094 1011460 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:19:11.948182 1011460 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:19:11.948281 1011460 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:19:11.948327 1011460 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:19:11.948373 1011460 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:19:11.948383 1011460 kubeadm.go:322] 
	I0116 03:19:11.948440 1011460 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:19:11.948449 1011460 kubeadm.go:322] 
	I0116 03:19:11.948546 1011460 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:19:11.948567 1011460 kubeadm.go:322] 
	I0116 03:19:11.948614 1011460 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:19:11.948725 1011460 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:19:11.948805 1011460 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:19:11.948815 1011460 kubeadm.go:322] 
	I0116 03:19:11.948891 1011460 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 03:19:11.948901 1011460 kubeadm.go:322] 
	I0116 03:19:11.948979 1011460 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 03:19:11.949011 1011460 kubeadm.go:322] 
	I0116 03:19:11.949086 1011460 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:19:11.949215 1011460 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:19:11.949311 1011460 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:19:11.949332 1011460 kubeadm.go:322] 
	I0116 03:19:11.949463 1011460 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:19:11.949576 1011460 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:19:11.949590 1011460 kubeadm.go:322] 
	I0116 03:19:11.949688 1011460 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token pj08z0.5ut3mf4afujawh3s \
	I0116 03:19:11.949837 1011460 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b \
	I0116 03:19:11.949877 1011460 kubeadm.go:322] 	--control-plane 
	I0116 03:19:11.949890 1011460 kubeadm.go:322] 
	I0116 03:19:11.949997 1011460 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:19:11.950009 1011460 kubeadm.go:322] 
	I0116 03:19:11.950108 1011460 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token pj08z0.5ut3mf4afujawh3s \
	I0116 03:19:11.950232 1011460 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b 
	I0116 03:19:11.950269 1011460 cni.go:84] Creating CNI manager for ""
	I0116 03:19:11.950284 1011460 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:19:11.952013 1011460 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:19:11.953373 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:19:12.016915 1011460 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:19:12.042169 1011460 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:19:12.042259 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=no-preload-934668 minikube.k8s.io/updated_at=2024_01_16T03_19_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:12.042266 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:12.092434 1011460 ops.go:34] apiserver oom_adj: -16
	I0116 03:19:13.250984 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:13.251026 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:13.251035 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:13.251046 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:13.251054 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:13.251078 1011681 retry.go:31] will retry after 3.301960932s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:12.168237 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:14.669074 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:12.372548 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:12.873171 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:13.372932 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:13.873086 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:14.373328 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:14.873249 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:15.372564 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:15.873604 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:16.372846 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:16.873652 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:16.558984 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:16.559016 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:16.559023 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:16.559031 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:16.559036 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:16.559056 1011681 retry.go:31] will retry after 4.433753761s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:17.166555 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:19.666500 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:17.373434 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:17.873591 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:18.373340 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:18.873267 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:19.373311 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:19.873538 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:20.372770 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:20.873645 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:21.373033 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:21.872773 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:22.372607 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:22.872582 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:23.372659 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:23.873410 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:24.372682 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:24.873365 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:24.989170 1011460 kubeadm.go:1088] duration metric: took 12.946988185s to wait for elevateKubeSystemPrivileges.
	I0116 03:19:24.989221 1011460 kubeadm.go:406] StartCluster complete in 5m13.370173315s
	I0116 03:19:24.989247 1011460 settings.go:142] acquiring lock: {Name:mk39c69fb5454efd5afab021c02c3bec1f1b4e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:19:24.989351 1011460 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:19:24.991793 1011460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:19:24.992117 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:19:24.992155 1011460 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:19:24.992266 1011460 addons.go:69] Setting storage-provisioner=true in profile "no-preload-934668"
	I0116 03:19:24.992274 1011460 addons.go:69] Setting default-storageclass=true in profile "no-preload-934668"
	I0116 03:19:24.992291 1011460 addons.go:234] Setting addon storage-provisioner=true in "no-preload-934668"
	I0116 03:19:24.992295 1011460 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-934668"
	I0116 03:19:24.992296 1011460 addons.go:69] Setting metrics-server=true in profile "no-preload-934668"
	I0116 03:19:24.992325 1011460 addons.go:234] Setting addon metrics-server=true in "no-preload-934668"
	W0116 03:19:24.992338 1011460 addons.go:243] addon metrics-server should already be in state true
	I0116 03:19:24.992393 1011460 config.go:182] Loaded profile config "no-preload-934668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	W0116 03:19:24.992300 1011460 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:19:24.992415 1011460 host.go:66] Checking if "no-preload-934668" exists ...
	I0116 03:19:24.992456 1011460 host.go:66] Checking if "no-preload-934668" exists ...
	I0116 03:19:24.992754 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:24.992775 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:24.992810 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:24.992831 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:24.992871 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:24.992959 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:25.010903 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33013
	I0116 03:19:25.011636 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.012150 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39939
	I0116 03:19:25.012167 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39475
	I0116 03:19:25.012223 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.012247 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.012568 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.012669 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.012784 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.013013 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.013037 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.013189 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.013202 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.013647 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:25.013677 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:25.014037 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.014040 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.014620 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetState
	I0116 03:19:25.014622 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:25.014713 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:25.018506 1011460 addons.go:234] Setting addon default-storageclass=true in "no-preload-934668"
	W0116 03:19:25.018563 1011460 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:19:25.018603 1011460 host.go:66] Checking if "no-preload-934668" exists ...
	I0116 03:19:25.019024 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:25.019089 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:25.034161 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
	I0116 03:19:25.034400 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40521
	I0116 03:19:25.034909 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.035027 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.035536 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.035555 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.035687 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.035698 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.036064 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.036123 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.036296 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetState
	I0116 03:19:25.036323 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetState
	I0116 03:19:25.037452 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0116 03:19:25.038065 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.038653 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:19:25.038797 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.038807 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.040516 1011460 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:19:25.039169 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.039494 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:19:25.041993 1011460 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:19:25.042021 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:19:25.042042 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:19:25.043350 1011460 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:19:20.998514 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:20.998541 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:20.998546 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:20.998553 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:20.998558 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:20.998576 1011681 retry.go:31] will retry after 6.19070677s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:22.164973 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:24.165241 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:25.044790 1011460 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:19:25.044804 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:19:25.044820 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:19:25.042734 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:25.044907 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:25.045505 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.046226 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:19:25.046284 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:19:25.046404 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.046434 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:19:25.046724 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:19:25.046878 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:19:25.048780 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.049237 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:19:25.049260 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.049432 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:19:25.049846 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:19:25.050200 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:19:25.050376 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:19:25.062306 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40141
	I0116 03:19:25.062765 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.063248 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.063261 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.063609 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.063805 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetState
	I0116 03:19:25.065537 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:19:25.065785 1011460 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:19:25.065818 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:19:25.065841 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:19:25.068664 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.069102 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:19:25.069125 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.069273 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:19:25.069454 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:19:25.069627 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:19:25.069763 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:19:25.182658 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:19:25.209575 1011460 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:19:25.231221 1011460 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:19:25.231310 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:19:25.287263 1011460 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:19:25.337307 1011460 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:19:25.337350 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:19:25.433778 1011460 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:19:25.433821 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:19:25.507802 1011460 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:19:25.528239 1011460 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-934668" context rescaled to 1 replicas
	I0116 03:19:25.528282 1011460 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.29 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:19:25.530067 1011460 out.go:177] * Verifying Kubernetes components...
	I0116 03:19:25.532055 1011460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:19:26.021224 1011460 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0116 03:19:26.359779 1011460 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.072464523s)
	I0116 03:19:26.359844 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.359859 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.359860 1011460 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.150243124s)
	I0116 03:19:26.359900 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.359919 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.360228 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.360258 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.360269 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.360278 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.360447 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Closing plugin on server side
	I0116 03:19:26.360507 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.360546 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.360560 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Closing plugin on server side
	I0116 03:19:26.361873 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.361895 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.361911 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.361920 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.362297 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Closing plugin on server side
	I0116 03:19:26.362339 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.362372 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.376371 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.376405 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.376703 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.376722 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.607902 1011460 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.100046486s)
	I0116 03:19:26.607968 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.607973 1011460 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.075879995s)
	I0116 03:19:26.608021 1011460 node_ready.go:35] waiting up to 6m0s for node "no-preload-934668" to be "Ready" ...
	I0116 03:19:26.607985 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.608450 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.608470 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.608483 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.608493 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.608771 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.608791 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.608794 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Closing plugin on server side
	I0116 03:19:26.608803 1011460 addons.go:470] Verifying addon metrics-server=true in "no-preload-934668"
	I0116 03:19:26.611385 1011460 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:19:26.612672 1011460 addons.go:505] enable addons completed in 1.620530835s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:19:26.611903 1011460 node_ready.go:49] node "no-preload-934668" has status "Ready":"True"
	I0116 03:19:26.612707 1011460 node_ready.go:38] duration metric: took 4.665246ms waiting for node "no-preload-934668" to be "Ready" ...
	I0116 03:19:26.612719 1011460 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:19:26.625443 1011460 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-64qzh" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:27.195320 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:27.195364 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:27.195375 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:27.195388 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:27.195396 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:27.195423 1011681 retry.go:31] will retry after 6.009246504s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:26.166175 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:28.167332 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:27.632495 1011460 pod_ready.go:97] error getting pod "coredns-76f75df574-64qzh" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-64qzh" not found
	I0116 03:19:27.632522 1011460 pod_ready.go:81] duration metric: took 1.007051516s waiting for pod "coredns-76f75df574-64qzh" in "kube-system" namespace to be "Ready" ...
	E0116 03:19:27.632534 1011460 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-64qzh" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-64qzh" not found
	I0116 03:19:27.632541 1011460 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-k2kc7" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.640682 1011460 pod_ready.go:92] pod "coredns-76f75df574-k2kc7" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:29.640718 1011460 pod_ready.go:81] duration metric: took 2.008169192s waiting for pod "coredns-76f75df574-k2kc7" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.640736 1011460 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.646552 1011460 pod_ready.go:92] pod "etcd-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:29.646579 1011460 pod_ready.go:81] duration metric: took 5.835401ms waiting for pod "etcd-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.646589 1011460 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.651970 1011460 pod_ready.go:92] pod "kube-apiserver-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:29.652004 1011460 pod_ready.go:81] duration metric: took 5.40828ms waiting for pod "kube-apiserver-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.652018 1011460 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.658077 1011460 pod_ready.go:92] pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:29.658104 1011460 pod_ready.go:81] duration metric: took 6.078615ms waiting for pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.658113 1011460 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fr424" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.663585 1011460 pod_ready.go:92] pod "kube-proxy-fr424" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:29.663608 1011460 pod_ready.go:81] duration metric: took 5.488053ms waiting for pod "kube-proxy-fr424" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.663617 1011460 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:30.037029 1011460 pod_ready.go:92] pod "kube-scheduler-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:30.037054 1011460 pod_ready.go:81] duration metric: took 373.431547ms waiting for pod "kube-scheduler-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:30.037066 1011460 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:32.045895 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:33.211194 1011681 system_pods.go:86] 5 kube-system pods found
	I0116 03:19:33.211224 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:33.211230 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:33.211234 1011681 system_pods.go:89] "kube-scheduler-old-k8s-version-788237" [738a1d18-b8a8-429e-b335-a542be90f4db] Pending
	I0116 03:19:33.211240 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:33.211245 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:33.211264 1011681 retry.go:31] will retry after 6.865213703s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:30.664955 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:33.164999 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:35.168217 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:34.545787 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:37.045220 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:40.083281 1011681 system_pods.go:86] 5 kube-system pods found
	I0116 03:19:40.083312 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:40.083317 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:40.083322 1011681 system_pods.go:89] "kube-scheduler-old-k8s-version-788237" [738a1d18-b8a8-429e-b335-a542be90f4db] Running
	I0116 03:19:40.083329 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:40.083333 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:40.083354 1011681 retry.go:31] will retry after 12.14535235s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0116 03:19:37.664530 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:39.666312 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:39.544826 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:41.545124 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:42.167148 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:44.666332 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:44.046884 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:46.546221 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:47.165232 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:49.165989 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:49.045230 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:51.045508 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:52.235832 1011681 system_pods.go:86] 8 kube-system pods found
	I0116 03:19:52.235865 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:52.235870 1011681 system_pods.go:89] "etcd-old-k8s-version-788237" [d4e1632d-c3ce-47c0-a692-0d108bd3c46c] Running
	I0116 03:19:52.235874 1011681 system_pods.go:89] "kube-apiserver-old-k8s-version-788237" [6d662cac-b4ba-4b5a-a942-38056d2aab63] Running
	I0116 03:19:52.235878 1011681 system_pods.go:89] "kube-controller-manager-old-k8s-version-788237" [2ccd00ed-668e-40b6-b364-63e7a85d4fe9] Pending
	I0116 03:19:52.235882 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:52.235887 1011681 system_pods.go:89] "kube-scheduler-old-k8s-version-788237" [738a1d18-b8a8-429e-b335-a542be90f4db] Running
	I0116 03:19:52.235892 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:52.235897 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:52.235916 1011681 retry.go:31] will retry after 13.113559392s: missing components: kube-controller-manager
	I0116 03:19:51.665249 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:53.667802 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:53.544777 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:55.545265 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:56.166884 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:58.167295 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:58.046171 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:00.545977 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:05.356292 1011681 system_pods.go:86] 8 kube-system pods found
	I0116 03:20:05.356332 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:20:05.356340 1011681 system_pods.go:89] "etcd-old-k8s-version-788237" [d4e1632d-c3ce-47c0-a692-0d108bd3c46c] Running
	I0116 03:20:05.356347 1011681 system_pods.go:89] "kube-apiserver-old-k8s-version-788237" [6d662cac-b4ba-4b5a-a942-38056d2aab63] Running
	I0116 03:20:05.356355 1011681 system_pods.go:89] "kube-controller-manager-old-k8s-version-788237" [2ccd00ed-668e-40b6-b364-63e7a85d4fe9] Running
	I0116 03:20:05.356361 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:20:05.356367 1011681 system_pods.go:89] "kube-scheduler-old-k8s-version-788237" [738a1d18-b8a8-429e-b335-a542be90f4db] Running
	I0116 03:20:05.356379 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:20:05.356392 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:20:05.356405 1011681 system_pods.go:126] duration metric: took 1m4.991757131s to wait for k8s-apps to be running ...
	I0116 03:20:05.356417 1011681 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:20:05.356484 1011681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:20:05.373421 1011681 system_svc.go:56] duration metric: took 16.991793ms WaitForService to wait for kubelet.
	I0116 03:20:05.373453 1011681 kubeadm.go:581] duration metric: took 1m11.178099498s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:20:05.373474 1011681 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:20:05.377261 1011681 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:20:05.377289 1011681 node_conditions.go:123] node cpu capacity is 2
	I0116 03:20:05.377303 1011681 node_conditions.go:105] duration metric: took 3.824619ms to run NodePressure ...
	I0116 03:20:05.377315 1011681 start.go:228] waiting for startup goroutines ...
	I0116 03:20:05.377324 1011681 start.go:233] waiting for cluster config update ...
	I0116 03:20:05.377340 1011681 start.go:242] writing updated cluster config ...
	I0116 03:20:05.377691 1011681 ssh_runner.go:195] Run: rm -f paused
	I0116 03:20:05.433407 1011681 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0116 03:20:05.435544 1011681 out.go:177] 
	W0116 03:20:05.437104 1011681 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0116 03:20:05.438355 1011681 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0116 03:20:05.439604 1011681 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-788237" cluster and "default" namespace by default
	I0116 03:20:00.665894 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:03.166003 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:03.046349 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:05.047570 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:05.669899 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:08.165604 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:07.545964 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:10.045541 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:10.665401 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:12.666068 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:15.165456 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:12.545270 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:15.044498 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:17.044757 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:17.664970 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:20.170600 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:19.045718 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:21.545760 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:22.665734 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:24.666166 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:24.046926 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:26.545103 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:26.666505 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:29.166514 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:28.545929 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:31.048171 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:31.166637 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:33.665953 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:33.548606 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:35.561699 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:35.666414 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:38.165516 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:38.045658 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:40.544791 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:40.667352 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:43.165494 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:45.166150 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:42.545935 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:45.045849 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:47.667601 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:49.667904 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:47.546691 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:50.044945 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:52.046574 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:52.165607 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:54.666005 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:54.544893 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:57.048203 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:56.666062 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:58.666122 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:59.546941 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:01.547326 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:00.675116 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:03.165630 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:05.165989 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:04.045454 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:06.545774 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:07.665616 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:10.165283 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:09.045454 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:11.544234 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:12.166050 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:14.665663 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:13.546119 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:16.044940 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:16.666322 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:18.666577 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:18.545883 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:21.045761 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:21.165313 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:23.166487 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:23.543371 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:25.545045 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:25.666044 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:27.666372 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:30.166224 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:28.046020 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:30.545380 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:32.664709 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:34.665743 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:32.548394 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:35.044140 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:37.045266 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:36.666094 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:39.166598 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:39.544754 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:41.545120 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:41.665435 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:44.177500 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:44.046063 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:46.545258 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:46.665179 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:48.665479 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:49.045153 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:51.544430 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:50.665798 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:52.668246 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:55.164905 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:53.545067 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:55.548667 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:57.664986 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:00.166610 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:58.044255 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:00.046558 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:02.664972 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:04.665647 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:02.547522 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:05.045464 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:07.049814 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:07.165053 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:09.166438 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:09.545216 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:11.546990 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:11.166827 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:13.664900 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:13.547322 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:16.046930 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:15.667462 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:18.165667 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:20.167440 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:18.544902 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:20.545091 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:22.167972 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:24.665473 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:23.046783 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:25.546772 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:26.665601 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:28.667378 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:27.552093 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:30.045665 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:32.046723 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:31.166653 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:33.169992 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:34.546495 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:36.552400 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:35.667041 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:38.166719 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:39.045530 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:41.046225 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:40.664638 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:42.664974 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:45.167738 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:43.545469 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:46.045132 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:47.665457 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:50.165843 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:48.045266 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:50.544748 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:52.166892 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:54.170375 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:52.545596 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:54.546876 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:57.048120 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:56.664513 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:57.165325 1011955 pod_ready.go:81] duration metric: took 4m0.008324579s waiting for pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace to be "Ready" ...
	E0116 03:22:57.165356 1011955 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:22:57.165370 1011955 pod_ready.go:38] duration metric: took 4m1.181991459s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:22:57.165388 1011955 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:22:57.165528 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:22:57.165670 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:22:57.223487 1011955 cri.go:89] found id: "94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:22:57.223515 1011955 cri.go:89] found id: ""
	I0116 03:22:57.223523 1011955 logs.go:284] 1 containers: [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24]
	I0116 03:22:57.223579 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.228506 1011955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:22:57.228603 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:22:57.275655 1011955 cri.go:89] found id: "c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:22:57.275681 1011955 cri.go:89] found id: ""
	I0116 03:22:57.275689 1011955 logs.go:284] 1 containers: [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8]
	I0116 03:22:57.275747 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.280168 1011955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:22:57.280248 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:22:57.325379 1011955 cri.go:89] found id: "8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:22:57.325403 1011955 cri.go:89] found id: ""
	I0116 03:22:57.325412 1011955 logs.go:284] 1 containers: [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760]
	I0116 03:22:57.325485 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.330376 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:22:57.330456 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:22:57.374600 1011955 cri.go:89] found id: "19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:22:57.374633 1011955 cri.go:89] found id: ""
	I0116 03:22:57.374644 1011955 logs.go:284] 1 containers: [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e]
	I0116 03:22:57.374731 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.379908 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:22:57.379996 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:22:57.422495 1011955 cri.go:89] found id: "cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:22:57.422524 1011955 cri.go:89] found id: ""
	I0116 03:22:57.422535 1011955 logs.go:284] 1 containers: [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f]
	I0116 03:22:57.422599 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.427327 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:22:57.427398 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:22:57.472666 1011955 cri.go:89] found id: "7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:22:57.472698 1011955 cri.go:89] found id: ""
	I0116 03:22:57.472715 1011955 logs.go:284] 1 containers: [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45]
	I0116 03:22:57.472773 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.477425 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:22:57.477487 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:22:57.519963 1011955 cri.go:89] found id: ""
	I0116 03:22:57.519998 1011955 logs.go:284] 0 containers: []
	W0116 03:22:57.520008 1011955 logs.go:286] No container was found matching "kindnet"
	I0116 03:22:57.520018 1011955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:22:57.520082 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:22:57.563323 1011955 cri.go:89] found id: "f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:22:57.563351 1011955 cri.go:89] found id: ""
	I0116 03:22:57.563361 1011955 logs.go:284] 1 containers: [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da]
	I0116 03:22:57.563429 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.567849 1011955 logs.go:123] Gathering logs for kube-controller-manager [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45] ...
	I0116 03:22:57.567885 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:22:57.630746 1011955 logs.go:123] Gathering logs for storage-provisioner [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da] ...
	I0116 03:22:57.630790 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:22:57.685136 1011955 logs.go:123] Gathering logs for container status ...
	I0116 03:22:57.685175 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:22:57.744223 1011955 logs.go:123] Gathering logs for dmesg ...
	I0116 03:22:57.744253 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:22:57.758357 1011955 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:22:57.758386 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:22:57.921587 1011955 logs.go:123] Gathering logs for kube-apiserver [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24] ...
	I0116 03:22:57.921631 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:22:57.981922 1011955 logs.go:123] Gathering logs for etcd [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8] ...
	I0116 03:22:57.981959 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:22:58.036701 1011955 logs.go:123] Gathering logs for kube-proxy [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f] ...
	I0116 03:22:58.036735 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:22:58.078332 1011955 logs.go:123] Gathering logs for kubelet ...
	I0116 03:22:58.078366 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:22:58.163271 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:22:58.163463 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:22:58.186700 1011955 logs.go:123] Gathering logs for coredns [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760] ...
	I0116 03:22:58.186740 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:22:58.230943 1011955 logs.go:123] Gathering logs for kube-scheduler [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e] ...
	I0116 03:22:58.230987 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:22:58.284787 1011955 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:22:58.284826 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:22:58.711979 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:22:58.712020 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:22:58.712201 1011955 out.go:239] X Problems detected in kubelet:
	W0116 03:22:58.712218 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:22:58.712232 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:22:58.712247 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:22:58.712259 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:22:59.550035 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:02.045996 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:04.049349 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:06.545441 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:08.713432 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:23:08.730913 1011955 api_server.go:72] duration metric: took 4m15.560433909s to wait for apiserver process to appear ...
	I0116 03:23:08.730953 1011955 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:23:08.731009 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:23:08.731083 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:23:08.781386 1011955 cri.go:89] found id: "94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:23:08.781415 1011955 cri.go:89] found id: ""
	I0116 03:23:08.781425 1011955 logs.go:284] 1 containers: [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24]
	I0116 03:23:08.781487 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:08.787261 1011955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:23:08.787341 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:23:08.840893 1011955 cri.go:89] found id: "c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:23:08.840929 1011955 cri.go:89] found id: ""
	I0116 03:23:08.840940 1011955 logs.go:284] 1 containers: [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8]
	I0116 03:23:08.840996 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:08.846278 1011955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:23:08.846350 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:23:08.894119 1011955 cri.go:89] found id: "8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:23:08.894141 1011955 cri.go:89] found id: ""
	I0116 03:23:08.894149 1011955 logs.go:284] 1 containers: [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760]
	I0116 03:23:08.894204 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:08.899019 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:23:08.899088 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:23:08.944579 1011955 cri.go:89] found id: "19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:23:08.944607 1011955 cri.go:89] found id: ""
	I0116 03:23:08.944616 1011955 logs.go:284] 1 containers: [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e]
	I0116 03:23:08.944689 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:08.948828 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:23:08.948907 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:23:08.997870 1011955 cri.go:89] found id: "cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:23:08.997904 1011955 cri.go:89] found id: ""
	I0116 03:23:08.997916 1011955 logs.go:284] 1 containers: [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f]
	I0116 03:23:08.997987 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:09.002335 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:23:09.002420 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:23:09.042381 1011955 cri.go:89] found id: "7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:23:09.042408 1011955 cri.go:89] found id: ""
	I0116 03:23:09.042417 1011955 logs.go:284] 1 containers: [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45]
	I0116 03:23:09.042481 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:09.047097 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:23:09.047180 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:23:09.093592 1011955 cri.go:89] found id: ""
	I0116 03:23:09.093628 1011955 logs.go:284] 0 containers: []
	W0116 03:23:09.093639 1011955 logs.go:286] No container was found matching "kindnet"
	I0116 03:23:09.093648 1011955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:23:09.093730 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:23:09.142839 1011955 cri.go:89] found id: "f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:23:09.142868 1011955 cri.go:89] found id: ""
	I0116 03:23:09.142878 1011955 logs.go:284] 1 containers: [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da]
	I0116 03:23:09.142950 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:09.146997 1011955 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:23:09.147032 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:23:09.550608 1011955 logs.go:123] Gathering logs for kubelet ...
	I0116 03:23:09.550654 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:23:09.637527 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:23:09.637714 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:23:09.660631 1011955 logs.go:123] Gathering logs for etcd [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8] ...
	I0116 03:23:09.660676 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:23:09.715818 1011955 logs.go:123] Gathering logs for kube-scheduler [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e] ...
	I0116 03:23:09.715860 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:23:09.770445 1011955 logs.go:123] Gathering logs for coredns [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760] ...
	I0116 03:23:09.770487 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:23:09.817598 1011955 logs.go:123] Gathering logs for kube-proxy [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f] ...
	I0116 03:23:09.817640 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:23:09.866233 1011955 logs.go:123] Gathering logs for kube-controller-manager [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45] ...
	I0116 03:23:09.866276 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:23:09.929526 1011955 logs.go:123] Gathering logs for storage-provisioner [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da] ...
	I0116 03:23:09.929564 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:23:09.971573 1011955 logs.go:123] Gathering logs for container status ...
	I0116 03:23:09.971603 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:23:10.023976 1011955 logs.go:123] Gathering logs for dmesg ...
	I0116 03:23:10.024008 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:23:10.042100 1011955 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:23:10.042140 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:23:10.197828 1011955 logs.go:123] Gathering logs for kube-apiserver [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24] ...
	I0116 03:23:10.197867 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:23:10.248743 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:10.248783 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:23:10.248869 1011955 out.go:239] X Problems detected in kubelet:
	W0116 03:23:10.248882 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:23:10.248900 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:23:10.248913 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:10.248919 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:23:08.545744 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:11.045197 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:13.047444 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:15.544949 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:20.249250 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:23:20.255958 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0116 03:23:20.257425 1011955 api_server.go:141] control plane version: v1.28.4
	I0116 03:23:20.257457 1011955 api_server.go:131] duration metric: took 11.526494801s to wait for apiserver health ...
	I0116 03:23:20.257467 1011955 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:23:20.257504 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:23:20.257572 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:23:20.304303 1011955 cri.go:89] found id: "94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:23:20.304331 1011955 cri.go:89] found id: ""
	I0116 03:23:20.304342 1011955 logs.go:284] 1 containers: [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24]
	I0116 03:23:20.304410 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.309509 1011955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:23:20.309599 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:23:20.353692 1011955 cri.go:89] found id: "c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:23:20.353721 1011955 cri.go:89] found id: ""
	I0116 03:23:20.353731 1011955 logs.go:284] 1 containers: [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8]
	I0116 03:23:20.353816 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.358894 1011955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:23:20.358978 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:23:20.409337 1011955 cri.go:89] found id: "8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:23:20.409364 1011955 cri.go:89] found id: ""
	I0116 03:23:20.409388 1011955 logs.go:284] 1 containers: [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760]
	I0116 03:23:20.409462 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.414337 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:23:20.414422 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:23:20.458585 1011955 cri.go:89] found id: "19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:23:20.458613 1011955 cri.go:89] found id: ""
	I0116 03:23:20.458621 1011955 logs.go:284] 1 containers: [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e]
	I0116 03:23:20.458688 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.463813 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:23:20.463899 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:23:20.514696 1011955 cri.go:89] found id: "cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:23:20.514729 1011955 cri.go:89] found id: ""
	I0116 03:23:20.514740 1011955 logs.go:284] 1 containers: [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f]
	I0116 03:23:20.514813 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.520195 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:23:20.520289 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:23:17.546020 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:19.546663 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:22.046331 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:20.563280 1011955 cri.go:89] found id: "7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:23:20.563313 1011955 cri.go:89] found id: ""
	I0116 03:23:20.563325 1011955 logs.go:284] 1 containers: [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45]
	I0116 03:23:20.563392 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.572063 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:23:20.572143 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:23:20.610050 1011955 cri.go:89] found id: ""
	I0116 03:23:20.610078 1011955 logs.go:284] 0 containers: []
	W0116 03:23:20.610087 1011955 logs.go:286] No container was found matching "kindnet"
	I0116 03:23:20.610093 1011955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:23:20.610149 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:23:20.651475 1011955 cri.go:89] found id: "f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:23:20.651499 1011955 cri.go:89] found id: ""
	I0116 03:23:20.651509 1011955 logs.go:284] 1 containers: [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da]
	I0116 03:23:20.651575 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.656379 1011955 logs.go:123] Gathering logs for etcd [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8] ...
	I0116 03:23:20.656405 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:23:20.706726 1011955 logs.go:123] Gathering logs for kube-scheduler [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e] ...
	I0116 03:23:20.706762 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:23:20.755434 1011955 logs.go:123] Gathering logs for storage-provisioner [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da] ...
	I0116 03:23:20.755472 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:23:20.796611 1011955 logs.go:123] Gathering logs for kubelet ...
	I0116 03:23:20.796649 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:23:20.888886 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:23:20.889106 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:23:20.915624 1011955 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:23:20.915668 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:23:21.069499 1011955 logs.go:123] Gathering logs for kube-apiserver [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24] ...
	I0116 03:23:21.069544 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:23:21.128642 1011955 logs.go:123] Gathering logs for kube-controller-manager [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45] ...
	I0116 03:23:21.128686 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:23:21.186151 1011955 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:23:21.186204 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:23:21.586722 1011955 logs.go:123] Gathering logs for container status ...
	I0116 03:23:21.586769 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:23:21.642253 1011955 logs.go:123] Gathering logs for dmesg ...
	I0116 03:23:21.642301 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:23:21.658076 1011955 logs.go:123] Gathering logs for coredns [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760] ...
	I0116 03:23:21.658108 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:23:21.712191 1011955 logs.go:123] Gathering logs for kube-proxy [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f] ...
	I0116 03:23:21.712229 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:23:21.763632 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:21.763672 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:23:21.763767 1011955 out.go:239] X Problems detected in kubelet:
	W0116 03:23:21.763792 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:23:21.763804 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:23:21.763816 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:21.763826 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:23:24.046962 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:26.544587 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:31.774617 1011955 system_pods.go:59] 8 kube-system pods found
	I0116 03:23:31.774653 1011955 system_pods.go:61] "coredns-5dd5756b68-mk795" [b928a6ae-07af-4bc4-a0c5-b3027730459c] Running
	I0116 03:23:31.774660 1011955 system_pods.go:61] "etcd-default-k8s-diff-port-775571" [1ec6d1b7-1c79-436f-bc2c-7f25d7b35d40] Running
	I0116 03:23:31.774664 1011955 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-775571" [0085c55b-c122-41dc-ab1b-e1110606563d] Running
	I0116 03:23:31.774670 1011955 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-775571" [57f644e6-74c4-4de5-a725-5dc2e049a78a] Running
	I0116 03:23:31.774677 1011955 system_pods.go:61] "kube-proxy-zw495" [d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09] Running
	I0116 03:23:31.774683 1011955 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-775571" [8b024142-545b-46c1-babc-f0a544d2debc] Running
	I0116 03:23:31.774694 1011955 system_pods.go:61] "metrics-server-57f55c9bc5-928d7" [d3671063-27a1-4ad8-9f5f-b3e01205f483] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:23:31.774709 1011955 system_pods.go:61] "storage-provisioner" [8c309131-3f2c-411d-9876-05424a2c3b79] Running
	I0116 03:23:31.774720 1011955 system_pods.go:74] duration metric: took 11.517244217s to wait for pod list to return data ...
	I0116 03:23:31.774733 1011955 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:23:31.777691 1011955 default_sa.go:45] found service account: "default"
	I0116 03:23:31.777717 1011955 default_sa.go:55] duration metric: took 2.971824ms for default service account to be created ...
	I0116 03:23:31.777725 1011955 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:23:31.784992 1011955 system_pods.go:86] 8 kube-system pods found
	I0116 03:23:31.785020 1011955 system_pods.go:89] "coredns-5dd5756b68-mk795" [b928a6ae-07af-4bc4-a0c5-b3027730459c] Running
	I0116 03:23:31.785027 1011955 system_pods.go:89] "etcd-default-k8s-diff-port-775571" [1ec6d1b7-1c79-436f-bc2c-7f25d7b35d40] Running
	I0116 03:23:31.785032 1011955 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-775571" [0085c55b-c122-41dc-ab1b-e1110606563d] Running
	I0116 03:23:31.785036 1011955 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-775571" [57f644e6-74c4-4de5-a725-5dc2e049a78a] Running
	I0116 03:23:31.785041 1011955 system_pods.go:89] "kube-proxy-zw495" [d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09] Running
	I0116 03:23:31.785045 1011955 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-775571" [8b024142-545b-46c1-babc-f0a544d2debc] Running
	I0116 03:23:31.785053 1011955 system_pods.go:89] "metrics-server-57f55c9bc5-928d7" [d3671063-27a1-4ad8-9f5f-b3e01205f483] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:23:31.785058 1011955 system_pods.go:89] "storage-provisioner" [8c309131-3f2c-411d-9876-05424a2c3b79] Running
	I0116 03:23:31.785066 1011955 system_pods.go:126] duration metric: took 7.335258ms to wait for k8s-apps to be running ...
	I0116 03:23:31.785075 1011955 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:23:31.785125 1011955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:23:31.801767 1011955 system_svc.go:56] duration metric: took 16.666559ms WaitForService to wait for kubelet.
	I0116 03:23:31.801797 1011955 kubeadm.go:581] duration metric: took 4m38.631327454s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:23:31.801841 1011955 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:23:31.805655 1011955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:23:31.805721 1011955 node_conditions.go:123] node cpu capacity is 2
	I0116 03:23:31.805773 1011955 node_conditions.go:105] duration metric: took 3.924567ms to run NodePressure ...
	I0116 03:23:31.805791 1011955 start.go:228] waiting for startup goroutines ...
	I0116 03:23:31.805822 1011955 start.go:233] waiting for cluster config update ...
	I0116 03:23:31.805842 1011955 start.go:242] writing updated cluster config ...
	I0116 03:23:31.806160 1011955 ssh_runner.go:195] Run: rm -f paused
	I0116 03:23:31.863603 1011955 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:23:31.865992 1011955 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-775571" cluster and "default" namespace by default
	I0116 03:23:28.545733 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:30.051002 1011460 pod_ready.go:81] duration metric: took 4m0.013925231s waiting for pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace to be "Ready" ...
	E0116 03:23:30.051029 1011460 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:23:30.051040 1011460 pod_ready.go:38] duration metric: took 4m3.438310266s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:23:30.051073 1011460 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:23:30.051111 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:23:30.051173 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:23:30.118195 1011460 cri.go:89] found id: "f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:30.118230 1011460 cri.go:89] found id: ""
	I0116 03:23:30.118241 1011460 logs.go:284] 1 containers: [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e]
	I0116 03:23:30.118325 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.124760 1011460 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:23:30.124844 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:23:30.193482 1011460 cri.go:89] found id: "2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:30.193512 1011460 cri.go:89] found id: ""
	I0116 03:23:30.193522 1011460 logs.go:284] 1 containers: [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f]
	I0116 03:23:30.193586 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.201066 1011460 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:23:30.201155 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:23:30.265943 1011460 cri.go:89] found id: "229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:30.265979 1011460 cri.go:89] found id: ""
	I0116 03:23:30.265991 1011460 logs.go:284] 1 containers: [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058]
	I0116 03:23:30.266071 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.271404 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:23:30.271498 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:23:30.315307 1011460 cri.go:89] found id: "63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:30.315336 1011460 cri.go:89] found id: ""
	I0116 03:23:30.315346 1011460 logs.go:284] 1 containers: [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021]
	I0116 03:23:30.315422 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.321045 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:23:30.321118 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:23:30.370734 1011460 cri.go:89] found id: "153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:30.370760 1011460 cri.go:89] found id: ""
	I0116 03:23:30.370770 1011460 logs.go:284] 1 containers: [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8]
	I0116 03:23:30.370821 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.375705 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:23:30.375785 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:23:30.415457 1011460 cri.go:89] found id: "997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:30.415487 1011460 cri.go:89] found id: ""
	I0116 03:23:30.415498 1011460 logs.go:284] 1 containers: [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d]
	I0116 03:23:30.415569 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.420117 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:23:30.420209 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:23:30.461056 1011460 cri.go:89] found id: ""
	I0116 03:23:30.461093 1011460 logs.go:284] 0 containers: []
	W0116 03:23:30.461105 1011460 logs.go:286] No container was found matching "kindnet"
	I0116 03:23:30.461114 1011460 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:23:30.461186 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:23:30.504581 1011460 cri.go:89] found id: "4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:30.504616 1011460 cri.go:89] found id: ""
	I0116 03:23:30.504627 1011460 logs.go:284] 1 containers: [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19]
	I0116 03:23:30.504698 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.509619 1011460 logs.go:123] Gathering logs for coredns [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058] ...
	I0116 03:23:30.509670 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:30.553986 1011460 logs.go:123] Gathering logs for kube-controller-manager [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d] ...
	I0116 03:23:30.554027 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:30.613360 1011460 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:23:30.613415 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:23:31.049281 1011460 logs.go:123] Gathering logs for dmesg ...
	I0116 03:23:31.049331 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:23:31.067692 1011460 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:23:31.067732 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:23:31.225415 1011460 logs.go:123] Gathering logs for kube-apiserver [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e] ...
	I0116 03:23:31.225457 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:31.288824 1011460 logs.go:123] Gathering logs for etcd [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f] ...
	I0116 03:23:31.288865 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:31.349273 1011460 logs.go:123] Gathering logs for container status ...
	I0116 03:23:31.349318 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:23:31.398655 1011460 logs.go:123] Gathering logs for kubelet ...
	I0116 03:23:31.398696 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:23:31.469496 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.469683 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.469882 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.470041 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:31.493488 1011460 logs.go:123] Gathering logs for kube-scheduler [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021] ...
	I0116 03:23:31.493533 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:31.551159 1011460 logs.go:123] Gathering logs for kube-proxy [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8] ...
	I0116 03:23:31.551200 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:31.590293 1011460 logs.go:123] Gathering logs for storage-provisioner [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19] ...
	I0116 03:23:31.590434 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:31.634337 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:31.634367 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:23:31.634430 1011460 out.go:239] X Problems detected in kubelet:
	W0116 03:23:31.634447 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.634457 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.634471 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.634476 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:31.634485 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:31.634490 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
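The four kubelet problems flagged above are authorization denials from the API server's node authorizer: immediately after the kubelet restarts, the authorizer has not yet linked node no-preload-934668 to the kube-proxy and kube-root-ca.crt ConfigMaps, so the first list/watch attempts are rejected. They appear transient here, since the same cluster reaches a healthy state further down. A hedged way to re-probe the same permission once the node has settled (assumes kubectl access to this cluster; the --as identity mirrors the user in the error message):

	kubectl --context no-preload-934668 auth can-i list configmaps \
	  --namespace kube-system \
	  --as=system:node:no-preload-934668 --as-group=system:nodes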
	I0116 03:23:41.635544 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:23:41.654207 1011460 api_server.go:72] duration metric: took 4m16.125890122s to wait for apiserver process to appear ...
	I0116 03:23:41.654244 1011460 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:23:41.654312 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:23:41.654391 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:23:41.704947 1011460 cri.go:89] found id: "f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:41.704976 1011460 cri.go:89] found id: ""
	I0116 03:23:41.704984 1011460 logs.go:284] 1 containers: [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e]
	I0116 03:23:41.705042 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.710602 1011460 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:23:41.710687 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:23:41.754322 1011460 cri.go:89] found id: "2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:41.754356 1011460 cri.go:89] found id: ""
	I0116 03:23:41.754368 1011460 logs.go:284] 1 containers: [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f]
	I0116 03:23:41.754437 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.760172 1011460 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:23:41.760283 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:23:41.810626 1011460 cri.go:89] found id: "229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:41.810664 1011460 cri.go:89] found id: ""
	I0116 03:23:41.810674 1011460 logs.go:284] 1 containers: [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058]
	I0116 03:23:41.810749 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.815588 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:23:41.815687 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:23:41.859547 1011460 cri.go:89] found id: "63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:41.859573 1011460 cri.go:89] found id: ""
	I0116 03:23:41.859580 1011460 logs.go:284] 1 containers: [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021]
	I0116 03:23:41.859637 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.864333 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:23:41.864416 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:23:41.914604 1011460 cri.go:89] found id: "153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:41.914638 1011460 cri.go:89] found id: ""
	I0116 03:23:41.914648 1011460 logs.go:284] 1 containers: [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8]
	I0116 03:23:41.914718 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.919459 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:23:41.919538 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:23:41.965709 1011460 cri.go:89] found id: "997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:41.965751 1011460 cri.go:89] found id: ""
	I0116 03:23:41.965763 1011460 logs.go:284] 1 containers: [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d]
	I0116 03:23:41.965857 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.970346 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:23:41.970445 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:23:42.017222 1011460 cri.go:89] found id: ""
	I0116 03:23:42.017253 1011460 logs.go:284] 0 containers: []
	W0116 03:23:42.017265 1011460 logs.go:286] No container was found matching "kindnet"
	I0116 03:23:42.017275 1011460 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:23:42.017341 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:23:42.065935 1011460 cri.go:89] found id: "4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:42.065967 1011460 cri.go:89] found id: ""
	I0116 03:23:42.065977 1011460 logs.go:284] 1 containers: [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19]
	I0116 03:23:42.066041 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:42.070695 1011460 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:23:42.070722 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:23:42.440423 1011460 logs.go:123] Gathering logs for kubelet ...
	I0116 03:23:42.440483 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:23:42.514598 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:42.514770 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:42.514914 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:42.515087 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:42.539532 1011460 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:23:42.539575 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:23:42.708733 1011460 logs.go:123] Gathering logs for coredns [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058] ...
	I0116 03:23:42.708775 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:42.792841 1011460 logs.go:123] Gathering logs for kube-scheduler [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021] ...
	I0116 03:23:42.792886 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:42.860086 1011460 logs.go:123] Gathering logs for kube-proxy [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8] ...
	I0116 03:23:42.860130 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:42.906116 1011460 logs.go:123] Gathering logs for kube-controller-manager [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d] ...
	I0116 03:23:42.906156 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:42.962172 1011460 logs.go:123] Gathering logs for storage-provisioner [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19] ...
	I0116 03:23:42.962220 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:43.001097 1011460 logs.go:123] Gathering logs for dmesg ...
	I0116 03:23:43.001133 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:23:43.017487 1011460 logs.go:123] Gathering logs for kube-apiserver [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e] ...
	I0116 03:23:43.017533 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:43.077368 1011460 logs.go:123] Gathering logs for etcd [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f] ...
	I0116 03:23:43.077408 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:43.125553 1011460 logs.go:123] Gathering logs for container status ...
	I0116 03:23:43.125587 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:23:43.175165 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:43.175195 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:23:43.175256 1011460 out.go:239] X Problems detected in kubelet:
	W0116 03:23:43.175268 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:43.175279 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:43.175292 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:43.175300 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:43.175308 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:43.175316 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
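Each per-component step in this pass uses the same two commands, both visible above: resolve the container ID with crictl, then tail its log. Inside the VM (for example via minikube ssh) that collapses to a sketch like the following; the --name filter and --tail value are the ones the harness uses, and kube-apiserver is just one example component:

	ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	sudo crictl logs --tail 400 "$ID"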
	I0116 03:23:53.176994 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:23:53.183515 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 200:
	ok
	I0116 03:23:53.185020 1011460 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 03:23:53.185050 1011460 api_server.go:131] duration metric: took 11.530797787s to wait for apiserver health ...
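The health gate is the apiserver's /healthz endpoint, which the harness polled until it returned HTTP 200 with body "ok". Assuming anonymous access to /healthz is enabled (the Kubernetes default grants it to unauthenticated users), the same probe can be issued directly from the host; -k skips TLS verification because the certificate is signed by minikube's own CA:

	curl -k https://192.168.50.29:8443/healthz
	# expected body: ok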
	I0116 03:23:53.185061 1011460 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:23:53.185092 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:23:53.185148 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:23:53.234245 1011460 cri.go:89] found id: "f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:53.234274 1011460 cri.go:89] found id: ""
	I0116 03:23:53.234284 1011460 logs.go:284] 1 containers: [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e]
	I0116 03:23:53.234356 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.239078 1011460 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:23:53.239169 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:23:53.286989 1011460 cri.go:89] found id: "2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:53.287021 1011460 cri.go:89] found id: ""
	I0116 03:23:53.287031 1011460 logs.go:284] 1 containers: [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f]
	I0116 03:23:53.287106 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.291809 1011460 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:23:53.291898 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:23:53.342514 1011460 cri.go:89] found id: "229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:53.342549 1011460 cri.go:89] found id: ""
	I0116 03:23:53.342560 1011460 logs.go:284] 1 containers: [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058]
	I0116 03:23:53.342644 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.347443 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:23:53.347536 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:23:53.407101 1011460 cri.go:89] found id: "63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:53.407129 1011460 cri.go:89] found id: ""
	I0116 03:23:53.407139 1011460 logs.go:284] 1 containers: [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021]
	I0116 03:23:53.407204 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.411444 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:23:53.411526 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:23:53.451514 1011460 cri.go:89] found id: "153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:53.451538 1011460 cri.go:89] found id: ""
	I0116 03:23:53.451545 1011460 logs.go:284] 1 containers: [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8]
	I0116 03:23:53.451613 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.455819 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:23:53.455907 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:23:53.498341 1011460 cri.go:89] found id: "997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:53.498372 1011460 cri.go:89] found id: ""
	I0116 03:23:53.498385 1011460 logs.go:284] 1 containers: [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d]
	I0116 03:23:53.498456 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.503007 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:23:53.503075 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:23:53.549549 1011460 cri.go:89] found id: ""
	I0116 03:23:53.549585 1011460 logs.go:284] 0 containers: []
	W0116 03:23:53.549597 1011460 logs.go:286] No container was found matching "kindnet"
	I0116 03:23:53.549606 1011460 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:23:53.549676 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:23:53.590624 1011460 cri.go:89] found id: "4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:53.590655 1011460 cri.go:89] found id: ""
	I0116 03:23:53.590672 1011460 logs.go:284] 1 containers: [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19]
	I0116 03:23:53.590743 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.594912 1011460 logs.go:123] Gathering logs for etcd [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f] ...
	I0116 03:23:53.594950 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:53.644842 1011460 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:23:53.644885 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:23:54.036154 1011460 logs.go:123] Gathering logs for container status ...
	I0116 03:23:54.036221 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:23:54.096374 1011460 logs.go:123] Gathering logs for kubelet ...
	I0116 03:23:54.096416 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:23:54.170840 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.171084 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.171231 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.171388 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:54.197037 1011460 logs.go:123] Gathering logs for kube-apiserver [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e] ...
	I0116 03:23:54.197086 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:54.254502 1011460 logs.go:123] Gathering logs for coredns [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058] ...
	I0116 03:23:54.254558 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:54.296951 1011460 logs.go:123] Gathering logs for kube-scheduler [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021] ...
	I0116 03:23:54.296999 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:54.353946 1011460 logs.go:123] Gathering logs for kube-proxy [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8] ...
	I0116 03:23:54.354001 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:54.399575 1011460 logs.go:123] Gathering logs for kube-controller-manager [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d] ...
	I0116 03:23:54.399609 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:54.463603 1011460 logs.go:123] Gathering logs for storage-provisioner [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19] ...
	I0116 03:23:54.463643 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:54.508557 1011460 logs.go:123] Gathering logs for dmesg ...
	I0116 03:23:54.508594 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:23:54.522542 1011460 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:23:54.522574 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:23:54.653996 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:54.654029 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:23:54.654095 1011460 out.go:239] X Problems detected in kubelet:
	W0116 03:23:54.654115 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.654128 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.654140 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.654148 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:54.654158 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:54.654167 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:24:04.664925 1011460 system_pods.go:59] 8 kube-system pods found
	I0116 03:24:04.664971 1011460 system_pods.go:61] "coredns-76f75df574-k2kc7" [d05aee05-aff7-4500-b656-8f66a3f622d2] Running
	I0116 03:24:04.664978 1011460 system_pods.go:61] "etcd-no-preload-934668" [b927b4df-f865-400c-9277-32778f7c5e30] Running
	I0116 03:24:04.664986 1011460 system_pods.go:61] "kube-apiserver-no-preload-934668" [648abde5-ec7c-4fd4-81e5-734ac6e631fc] Running
	I0116 03:24:04.664994 1011460 system_pods.go:61] "kube-controller-manager-no-preload-934668" [8a568dfa-e657-47e8-b369-c02a31271e58] Running
	I0116 03:24:04.664998 1011460 system_pods.go:61] "kube-proxy-fr424" [f24ae333-7f56-47bf-b66f-3192010a2cc4] Running
	I0116 03:24:04.665003 1011460 system_pods.go:61] "kube-scheduler-no-preload-934668" [fc295053-1d78-4f15-91f8-41330bf47c1a] Running
	I0116 03:24:04.665013 1011460 system_pods.go:61] "metrics-server-57f55c9bc5-6w2t7" [5169514b-c507-4e5e-b607-6806f6e32801] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:24:04.665019 1011460 system_pods.go:61] "storage-provisioner" [eb4f416a-8bdc-4a7c-bea1-14015339520b] Running
	I0116 03:24:04.665027 1011460 system_pods.go:74] duration metric: took 11.479959039s to wait for pod list to return data ...
	I0116 03:24:04.665042 1011460 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:24:04.668183 1011460 default_sa.go:45] found service account: "default"
	I0116 03:24:04.668217 1011460 default_sa.go:55] duration metric: took 3.167177ms for default service account to be created ...
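The default service account step is a plain existence lookup; the manual equivalent is simply (sketch):

	kubectl --context no-preload-934668 -n default get serviceaccount default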
	I0116 03:24:04.668228 1011460 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:24:04.674701 1011460 system_pods.go:86] 8 kube-system pods found
	I0116 03:24:04.674736 1011460 system_pods.go:89] "coredns-76f75df574-k2kc7" [d05aee05-aff7-4500-b656-8f66a3f622d2] Running
	I0116 03:24:04.674742 1011460 system_pods.go:89] "etcd-no-preload-934668" [b927b4df-f865-400c-9277-32778f7c5e30] Running
	I0116 03:24:04.674747 1011460 system_pods.go:89] "kube-apiserver-no-preload-934668" [648abde5-ec7c-4fd4-81e5-734ac6e631fc] Running
	I0116 03:24:04.674752 1011460 system_pods.go:89] "kube-controller-manager-no-preload-934668" [8a568dfa-e657-47e8-b369-c02a31271e58] Running
	I0116 03:24:04.674756 1011460 system_pods.go:89] "kube-proxy-fr424" [f24ae333-7f56-47bf-b66f-3192010a2cc4] Running
	I0116 03:24:04.674760 1011460 system_pods.go:89] "kube-scheduler-no-preload-934668" [fc295053-1d78-4f15-91f8-41330bf47c1a] Running
	I0116 03:24:04.674766 1011460 system_pods.go:89] "metrics-server-57f55c9bc5-6w2t7" [5169514b-c507-4e5e-b607-6806f6e32801] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:24:04.674771 1011460 system_pods.go:89] "storage-provisioner" [eb4f416a-8bdc-4a7c-bea1-14015339520b] Running
	I0116 03:24:04.674780 1011460 system_pods.go:126] duration metric: took 6.545541ms to wait for k8s-apps to be running ...
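Note that this wait succeeds even though metrics-server-57f55c9bc5-6w2t7 is still Pending; evidently only the core kube-system pods need to be Running. A stricter manual check, which would instead block on the Pending pod, is sketched here:

	kubectl --context no-preload-934668 -n kube-system get pods
	kubectl --context no-preload-934668 -n kube-system wait \
	  --for=condition=Ready pod --all --timeout=120s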
	I0116 03:24:04.674794 1011460 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:24:04.674845 1011460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:24:04.692060 1011460 system_svc.go:56] duration metric: took 17.248436ms WaitForService to wait for kubelet.
	I0116 03:24:04.692099 1011460 kubeadm.go:581] duration metric: took 4m39.163790794s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:24:04.692129 1011460 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:24:04.696664 1011460 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:24:04.696709 1011460 node_conditions.go:123] node cpu capacity is 2
	I0116 03:24:04.696728 1011460 node_conditions.go:105] duration metric: took 4.592869ms to run NodePressure ...
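The NodePressure step reads the capacity the node reports: 17784752Ki (about 17 GiB) of ephemeral storage and 2 CPUs here. The same figures can be pulled directly with kubectl (sketch; jsonpath output is unformatted):

	kubectl --context no-preload-934668 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity}{"\n"}{end}'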
	I0116 03:24:04.696745 1011460 start.go:228] waiting for startup goroutines ...
	I0116 03:24:04.696755 1011460 start.go:233] waiting for cluster config update ...
	I0116 03:24:04.696770 1011460 start.go:242] writing updated cluster config ...
	I0116 03:24:04.697135 1011460 ssh_runner.go:195] Run: rm -f paused
	I0116 03:24:04.750649 1011460 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0116 03:24:04.752669 1011460 out.go:177] * Done! kubectl is now configured to use "no-preload-934668" cluster and "default" namespace by default
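Everything below the Done! marker is captured post-mortem output rather than part of the start flow: the ==> CRI-O <== block appears to be the container-runtime section of a minikube logs dump, sourced from the crio unit's journal, and note that it comes from a different parallel profile (embed-certs-480663). A hedged way to pull the same journal interactively, mirroring the journalctl invocation the harness itself runs above:

	out/minikube-linux-amd64 -p embed-certs-480663 ssh "sudo journalctl -u crio --no-pager -n 400"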
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 03:12:41 UTC, ends at Tue 2024-01-16 03:26:42 UTC. --
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.228349025Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375602228333362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=033084ed-248a-40ad-a83f-e719c61e17bd name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.228845470Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b683e4cd-a58f-46ee-9433-183eceed95b5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.228898809Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b683e4cd-a58f-46ee-9433-183eceed95b5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.229105113Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657,PodSandboxId:5e742497bde0f88f37c8d63edf849ebec1984a658dfa6a132d1242bbf84a0acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705374826755989756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da59ff59-869f-48a9-a5c5-c95bb807cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 427d56c5,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5f11b59c21e49de34360bf58b39d8139d2062e46b02a2d693f3ea0fd10fd13b,PodSandboxId:e1e3ebeead958f5c82e6e4fdf744b4ed10ab4573850eb65ecd70bb6ce8ead286,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705374804591127540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7ad1e22-9448-44d8-aee0-5170d264d3f6,},Annotations:map[string]string{io.kubernetes.container.hash: 9a679b09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70,PodSandboxId:a4bbad6c2b2c69570905e1037daed24395fadf12cc9c80716ecda8f08fc1e5b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705374803134072660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-stqh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbcef96-218b-42ed-9daf-72c274be0690,},Annotations:map[string]string{io.kubernetes.container.hash: e0eac2e6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76,PodSandboxId:5e742497bde0f88f37c8d63edf849ebec1984a658dfa6a132d1242bbf84a0acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705374795550747304,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: da59ff59-869f-48a9-a5c5-c95bb807cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 427d56c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047,PodSandboxId:9ff79d096f4801dbce33572a413be54f3c99636c38dd2f99bdf17af35cd89ac9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705374795459731629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabb98a7-fe
55-4105-a5d2-c1e312464107,},Annotations:map[string]string{io.kubernetes.container.hash: aa3b13c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8,PodSandboxId:29324fa0b0c09f9749f68778cd4da177bb3207dad8e538cb88511f7b754b8a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705374789321061954,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ab7aa1bd8c13dd11
2e2029a6666906b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618,PodSandboxId:dad887fa7ce4c4d730ad4ad07900ef244bee16de0c9abab8cc98d3809bc84130,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705374788776726935,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b4972d069ffd7fa5b114fbba2bb2c59,},Annotations:map[string]string{io
.kubernetes.container.hash: bc32c30a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f,PodSandboxId:9c0eabefa8e5bad7f1757d12ef76455a94e8e5a7603524e869b455b5b746f748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705374788574902731,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70760e2d8cd956052088e57297a3e675,},Annotations:map[string]string{io.kubernete
s.container.hash: 9057951a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994,PodSandboxId:237d387addb2c96c71300d833fdc618b3cf0f45ce5dbc34e4a9f5aab7ca9cca0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705374788509555694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5583cf18e2fbf50799cf889aa6297f90,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b683e4cd-a58f-46ee-9433-183eceed95b5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.272096286Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5fdb9035-4dca-4822-8f9b-0d89ea6e9f5c name=/runtime.v1.RuntimeService/Version
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.272253268Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5fdb9035-4dca-4822-8f9b-0d89ea6e9f5c name=/runtime.v1.RuntimeService/Version
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.273487766Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b18d20bb-d66a-4e2c-b1bd-e3aae3d9b007 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.273875814Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375602273862407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b18d20bb-d66a-4e2c-b1bd-e3aae3d9b007 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.274615270Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5438c4fa-c938-43f9-a5e9-407e55adee0e name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.274697234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5438c4fa-c938-43f9-a5e9-407e55adee0e name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.274908723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657,PodSandboxId:5e742497bde0f88f37c8d63edf849ebec1984a658dfa6a132d1242bbf84a0acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705374826755989756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da59ff59-869f-48a9-a5c5-c95bb807cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 427d56c5,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5f11b59c21e49de34360bf58b39d8139d2062e46b02a2d693f3ea0fd10fd13b,PodSandboxId:e1e3ebeead958f5c82e6e4fdf744b4ed10ab4573850eb65ecd70bb6ce8ead286,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705374804591127540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7ad1e22-9448-44d8-aee0-5170d264d3f6,},Annotations:map[string]string{io.kubernetes.container.hash: 9a679b09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70,PodSandboxId:a4bbad6c2b2c69570905e1037daed24395fadf12cc9c80716ecda8f08fc1e5b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705374803134072660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-stqh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbcef96-218b-42ed-9daf-72c274be0690,},Annotations:map[string]string{io.kubernetes.container.hash: e0eac2e6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76,PodSandboxId:5e742497bde0f88f37c8d63edf849ebec1984a658dfa6a132d1242bbf84a0acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705374795550747304,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: da59ff59-869f-48a9-a5c5-c95bb807cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 427d56c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047,PodSandboxId:9ff79d096f4801dbce33572a413be54f3c99636c38dd2f99bdf17af35cd89ac9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705374795459731629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabb98a7-fe
55-4105-a5d2-c1e312464107,},Annotations:map[string]string{io.kubernetes.container.hash: aa3b13c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8,PodSandboxId:29324fa0b0c09f9749f68778cd4da177bb3207dad8e538cb88511f7b754b8a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705374789321061954,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ab7aa1bd8c13dd11
2e2029a6666906b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618,PodSandboxId:dad887fa7ce4c4d730ad4ad07900ef244bee16de0c9abab8cc98d3809bc84130,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705374788776726935,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b4972d069ffd7fa5b114fbba2bb2c59,},Annotations:map[string]string{io
.kubernetes.container.hash: bc32c30a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f,PodSandboxId:9c0eabefa8e5bad7f1757d12ef76455a94e8e5a7603524e869b455b5b746f748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705374788574902731,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70760e2d8cd956052088e57297a3e675,},Annotations:map[string]string{io.kubernete
s.container.hash: 9057951a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994,PodSandboxId:237d387addb2c96c71300d833fdc618b3cf0f45ce5dbc34e4a9f5aab7ca9cca0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705374788509555694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5583cf18e2fbf50799cf889aa6297f90,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5438c4fa-c938-43f9-a5e9-407e55adee0e name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.316919646Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5d6ec480-f4f4-4fa5-bf59-e4496c2e79b3 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.317019530Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5d6ec480-f4f4-4fa5-bf59-e4496c2e79b3 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.319717769Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=676f3a41-f7d9-4cee-83a7-995e6ff0c344 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.320565182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375602320446125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=676f3a41-f7d9-4cee-83a7-995e6ff0c344 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.322566227Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1ebe5aeb-1eeb-4e35-ab35-49d58b96226d name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.322642449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1ebe5aeb-1eeb-4e35-ab35-49d58b96226d name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.324287440Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657,PodSandboxId:5e742497bde0f88f37c8d63edf849ebec1984a658dfa6a132d1242bbf84a0acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705374826755989756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da59ff59-869f-48a9-a5c5-c95bb807cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 427d56c5,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5f11b59c21e49de34360bf58b39d8139d2062e46b02a2d693f3ea0fd10fd13b,PodSandboxId:e1e3ebeead958f5c82e6e4fdf744b4ed10ab4573850eb65ecd70bb6ce8ead286,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705374804591127540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7ad1e22-9448-44d8-aee0-5170d264d3f6,},Annotations:map[string]string{io.kubernetes.container.hash: 9a679b09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70,PodSandboxId:a4bbad6c2b2c69570905e1037daed24395fadf12cc9c80716ecda8f08fc1e5b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705374803134072660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-stqh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbcef96-218b-42ed-9daf-72c274be0690,},Annotations:map[string]string{io.kubernetes.container.hash: e0eac2e6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76,PodSandboxId:5e742497bde0f88f37c8d63edf849ebec1984a658dfa6a132d1242bbf84a0acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705374795550747304,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: da59ff59-869f-48a9-a5c5-c95bb807cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 427d56c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047,PodSandboxId:9ff79d096f4801dbce33572a413be54f3c99636c38dd2f99bdf17af35cd89ac9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705374795459731629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabb98a7-fe
55-4105-a5d2-c1e312464107,},Annotations:map[string]string{io.kubernetes.container.hash: aa3b13c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8,PodSandboxId:29324fa0b0c09f9749f68778cd4da177bb3207dad8e538cb88511f7b754b8a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705374789321061954,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ab7aa1bd8c13dd11
2e2029a6666906b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618,PodSandboxId:dad887fa7ce4c4d730ad4ad07900ef244bee16de0c9abab8cc98d3809bc84130,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705374788776726935,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b4972d069ffd7fa5b114fbba2bb2c59,},Annotations:map[string]string{io
.kubernetes.container.hash: bc32c30a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f,PodSandboxId:9c0eabefa8e5bad7f1757d12ef76455a94e8e5a7603524e869b455b5b746f748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705374788574902731,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70760e2d8cd956052088e57297a3e675,},Annotations:map[string]string{io.kubernete
s.container.hash: 9057951a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994,PodSandboxId:237d387addb2c96c71300d833fdc618b3cf0f45ce5dbc34e4a9f5aab7ca9cca0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705374788509555694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5583cf18e2fbf50799cf889aa6297f90,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1ebe5aeb-1eeb-4e35-ab35-49d58b96226d name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.361288133Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0000af33-28af-47a0-9e33-02370dc9ffa3 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.361375426Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0000af33-28af-47a0-9e33-02370dc9ffa3 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.363499477Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=58b1eb1a-3be0-46f1-a2e0-c9e42f037263 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.363886415Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375602363871463,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=58b1eb1a-3be0-46f1-a2e0-c9e42f037263 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.364570073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1eeb51c6-0c54-4934-be41-e37a29bbef46 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.364643025Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1eeb51c6-0c54-4934-be41-e37a29bbef46 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:26:42 embed-certs-480663 crio[723]: time="2024-01-16 03:26:42.364838112Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657,PodSandboxId:5e742497bde0f88f37c8d63edf849ebec1984a658dfa6a132d1242bbf84a0acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705374826755989756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da59ff59-869f-48a9-a5c5-c95bb807cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 427d56c5,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5f11b59c21e49de34360bf58b39d8139d2062e46b02a2d693f3ea0fd10fd13b,PodSandboxId:e1e3ebeead958f5c82e6e4fdf744b4ed10ab4573850eb65ecd70bb6ce8ead286,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705374804591127540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7ad1e22-9448-44d8-aee0-5170d264d3f6,},Annotations:map[string]string{io.kubernetes.container.hash: 9a679b09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70,PodSandboxId:a4bbad6c2b2c69570905e1037daed24395fadf12cc9c80716ecda8f08fc1e5b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705374803134072660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-stqh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbcef96-218b-42ed-9daf-72c274be0690,},Annotations:map[string]string{io.kubernetes.container.hash: e0eac2e6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76,PodSandboxId:5e742497bde0f88f37c8d63edf849ebec1984a658dfa6a132d1242bbf84a0acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705374795550747304,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: da59ff59-869f-48a9-a5c5-c95bb807cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 427d56c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047,PodSandboxId:9ff79d096f4801dbce33572a413be54f3c99636c38dd2f99bdf17af35cd89ac9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705374795459731629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabb98a7-fe
55-4105-a5d2-c1e312464107,},Annotations:map[string]string{io.kubernetes.container.hash: aa3b13c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8,PodSandboxId:29324fa0b0c09f9749f68778cd4da177bb3207dad8e538cb88511f7b754b8a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705374789321061954,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ab7aa1bd8c13dd11
2e2029a6666906b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618,PodSandboxId:dad887fa7ce4c4d730ad4ad07900ef244bee16de0c9abab8cc98d3809bc84130,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705374788776726935,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b4972d069ffd7fa5b114fbba2bb2c59,},Annotations:map[string]string{io
.kubernetes.container.hash: bc32c30a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f,PodSandboxId:9c0eabefa8e5bad7f1757d12ef76455a94e8e5a7603524e869b455b5b746f748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705374788574902731,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70760e2d8cd956052088e57297a3e675,},Annotations:map[string]string{io.kubernete
s.container.hash: 9057951a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994,PodSandboxId:237d387addb2c96c71300d833fdc618b3cf0f45ce5dbc34e4a9f5aab7ca9cca0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705374788509555694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5583cf18e2fbf50799cf889aa6297f90,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1eeb51c6-0c54-4934-be41-e37a29bbef46 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0f37f0f7c7339       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   5e742497bde0f       storage-provisioner
	f5f11b59c21e4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   e1e3ebeead958       busybox
	2cc211416aab6       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   a4bbad6c2b2c6       coredns-5dd5756b68-stqh5
	653a87cc5b4e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   5e742497bde0f       storage-provisioner
	da3ca3a9cda0a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   9ff79d096f480       kube-proxy-j4786
	ab45603106135       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   29324fa0b0c09       kube-scheduler-embed-certs-480663
	36288d0c42d12       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   dad887fa7ce4c       etcd-embed-certs-480663
	42d452ff0268f       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   9c0eabefa8e5b       kube-apiserver-embed-certs-480663
	f75f023773154       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   237d387addb2c       kube-controller-manager-embed-certs-480663
	
	
	==> coredns [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55407 - 52631 "HINFO IN 341389483529151724.1810516983307257500. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010011316s
	
	
	==> describe nodes <==
	Name:               embed-certs-480663
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-480663
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=embed-certs-480663
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_04_54_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:04:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-480663
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:26:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:23:57 +0000   Tue, 16 Jan 2024 03:04:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:23:57 +0000   Tue, 16 Jan 2024 03:04:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:23:57 +0000   Tue, 16 Jan 2024 03:04:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:23:57 +0000   Tue, 16 Jan 2024 03:13:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.150
	  Hostname:    embed-certs-480663
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 907673c0529d4fe7bddee1a62166d776
	  System UUID:                907673c0-529d-4fe7-bdde-e1a62166d776
	  Boot ID:                    ffa04338-2d5a-4308-af70-f8f39809837f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5dd5756b68-stqh5                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-480663                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-480663             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-480663    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-j4786                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-480663             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-7d2fh               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-480663 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-480663 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-480663 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-480663 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-480663 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node embed-certs-480663 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-480663 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-480663 event: Registered Node embed-certs-480663 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-480663 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-480663 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-480663 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-480663 event: Registered Node embed-certs-480663 in Controller
	
	
	==> dmesg <==
	[Jan16 03:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069299] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.402616] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.433360] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153755] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000025] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.489527] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.501555] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.120462] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.139757] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.139498] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[  +0.235945] systemd-fstab-generator[708]: Ignoring "noauto" for root device
	[Jan16 03:13] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[ +15.343598] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618] <==
	{"level":"info","ts":"2024-01-16T03:13:17.865434Z","caller":"traceutil/trace.go:171","msg":"trace[1890352388] range","detail":"{range_begin:/registry/minions/embed-certs-480663; range_end:; response_count:1; response_revision:545; }","duration":"127.186903ms","start":"2024-01-16T03:13:17.738229Z","end":"2024-01-16T03:13:17.865416Z","steps":["trace[1890352388] 'agreement among raft nodes before linearized reading'  (duration: 126.645346ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:13:18.580693Z","caller":"traceutil/trace.go:171","msg":"trace[1123786873] linearizableReadLoop","detail":"{readStateIndex:579; appliedIndex:578; }","duration":"444.948706ms","start":"2024-01-16T03:13:18.135724Z","end":"2024-01-16T03:13:18.580673Z","steps":["trace[1123786873] 'read index received'  (duration: 442.141396ms)","trace[1123786873] 'applied index is now lower than readState.Index'  (duration: 2.806049ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T03:13:18.580828Z","caller":"traceutil/trace.go:171","msg":"trace[1217530773] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"705.379675ms","start":"2024-01-16T03:13:17.875439Z","end":"2024-01-16T03:13:18.580819Z","steps":["trace[1217530773] 'process raft request'  (duration: 702.477744ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:13:18.580909Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:13:17.8754Z","time spent":"705.451341ms","remote":"127.0.0.1:56658","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":865,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/busybox.17aab557112c7d2b\" mod_revision:530 > success:<request_put:<key:\"/registry/events/default/busybox.17aab557112c7d2b\" value_size:798 lease:293170551725453430 >> failure:<request_range:<key:\"/registry/events/default/busybox.17aab557112c7d2b\" > >"}
	{"level":"warn","ts":"2024-01-16T03:13:18.581126Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"445.418522ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4012"}
	{"level":"info","ts":"2024-01-16T03:13:18.581232Z","caller":"traceutil/trace.go:171","msg":"trace[1826876207] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:546; }","duration":"445.446095ms","start":"2024-01-16T03:13:18.135698Z","end":"2024-01-16T03:13:18.581145Z","steps":["trace[1826876207] 'agreement among raft nodes before linearized reading'  (duration: 445.329268ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:13:18.581267Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:13:18.135681Z","time spent":"445.57637ms","remote":"127.0.0.1:56742","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":4036,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
	{"level":"warn","ts":"2024-01-16T03:13:18.581417Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.141911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:764"}
	{"level":"info","ts":"2024-01-16T03:13:18.581437Z","caller":"traceutil/trace.go:171","msg":"trace[407147067] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:546; }","duration":"217.161197ms","start":"2024-01-16T03:13:18.364269Z","end":"2024-01-16T03:13:18.58143Z","steps":["trace[407147067] 'agreement among raft nodes before linearized reading'  (duration: 217.117653ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:13:18.722425Z","caller":"traceutil/trace.go:171","msg":"trace[1625719483] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"132.715386ms","start":"2024-01-16T03:13:18.589688Z","end":"2024-01-16T03:13:18.722403Z","steps":["trace[1625719483] 'process raft request'  (duration: 115.031256ms)","trace[1625719483] 'compare'  (duration: 17.560917ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T03:13:18.878708Z","caller":"traceutil/trace.go:171","msg":"trace[1566635712] linearizableReadLoop","detail":"{readStateIndex:581; appliedIndex:580; }","duration":"129.342435ms","start":"2024-01-16T03:13:18.74935Z","end":"2024-01-16T03:13:18.878692Z","steps":["trace[1566635712] 'read index received'  (duration: 107.01544ms)","trace[1566635712] 'applied index is now lower than readState.Index'  (duration: 22.326449ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T03:13:18.87887Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.521615ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-480663\" ","response":"range_response_count:1 size:5664"}
	{"level":"info","ts":"2024-01-16T03:13:18.878893Z","caller":"traceutil/trace.go:171","msg":"trace[1686190376] range","detail":"{range_begin:/registry/minions/embed-certs-480663; range_end:; response_count:1; response_revision:548; }","duration":"129.560168ms","start":"2024-01-16T03:13:18.749325Z","end":"2024-01-16T03:13:18.878886Z","steps":["trace[1686190376] 'agreement among raft nodes before linearized reading'  (duration: 129.457787ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:13:18.879491Z","caller":"traceutil/trace.go:171","msg":"trace[1610609892] transaction","detail":"{read_only:false; response_revision:548; number_of_response:1; }","duration":"148.155244ms","start":"2024-01-16T03:13:18.731319Z","end":"2024-01-16T03:13:18.879474Z","steps":["trace[1610609892] 'process raft request'  (duration: 125.173932ms)","trace[1610609892] 'compare'  (duration: 22.1172ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T03:14:04.534781Z","caller":"traceutil/trace.go:171","msg":"trace[771273906] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"153.761543ms","start":"2024-01-16T03:14:04.380984Z","end":"2024-01-16T03:14:04.534746Z","steps":["trace[771273906] 'process raft request'  (duration: 117.492611ms)","trace[771273906] 'compare'  (duration: 36.031757ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T03:14:04.535093Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.446934ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2024-01-16T03:14:04.535575Z","caller":"traceutil/trace.go:171","msg":"trace[48173144] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:610; }","duration":"146.089403ms","start":"2024-01-16T03:14:04.38947Z","end":"2024-01-16T03:14:04.535559Z","steps":["trace[48173144] 'agreement among raft nodes before linearized reading'  (duration: 145.36668ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:14:04.534771Z","caller":"traceutil/trace.go:171","msg":"trace[1414106003] linearizableReadLoop","detail":"{readStateIndex:653; appliedIndex:652; }","duration":"145.234081ms","start":"2024-01-16T03:14:04.389501Z","end":"2024-01-16T03:14:04.534735Z","steps":["trace[1414106003] 'read index received'  (duration: 108.92355ms)","trace[1414106003] 'applied index is now lower than readState.Index'  (duration: 36.309739ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T03:14:05.426288Z","caller":"traceutil/trace.go:171","msg":"trace[1112784573] linearizableReadLoop","detail":"{readStateIndex:654; appliedIndex:653; }","duration":"209.581136ms","start":"2024-01-16T03:14:05.21669Z","end":"2024-01-16T03:14:05.426272Z","steps":["trace[1112784573] 'read index received'  (duration: 209.30074ms)","trace[1112784573] 'applied index is now lower than readState.Index'  (duration: 279.639µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T03:14:05.426736Z","caller":"traceutil/trace.go:171","msg":"trace[784698471] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"227.949099ms","start":"2024-01-16T03:14:05.198772Z","end":"2024-01-16T03:14:05.426722Z","steps":["trace[784698471] 'process raft request'  (duration: 227.260693ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:14:05.426853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.167756ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T03:14:05.427706Z","caller":"traceutil/trace.go:171","msg":"trace[1316095772] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:611; }","duration":"211.032004ms","start":"2024-01-16T03:14:05.216662Z","end":"2024-01-16T03:14:05.427694Z","steps":["trace[1316095772] 'agreement among raft nodes before linearized reading'  (duration: 210.146643ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:23:12.356837Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":825}
	{"level":"info","ts":"2024-01-16T03:23:12.359392Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":825,"took":"2.279253ms","hash":3028259112}
	{"level":"info","ts":"2024-01-16T03:23:12.35946Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3028259112,"revision":825,"compact-revision":-1}
	
	
	==> kernel <==
	 03:26:42 up 14 min,  0 users,  load average: 0.62, 0.34, 0.22
	Linux embed-certs-480663 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f] <==
	I0116 03:23:14.243381       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 03:23:15.243648       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:23:15.243805       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:23:15.243841       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:23:15.244025       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:23:15.244086       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:23:15.245423       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:24:14.100749       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 03:24:15.244744       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:24:15.244939       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:24:15.244975       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:24:15.246282       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:24:15.246361       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:24:15.246388       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:25:14.100876       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 03:26:14.101082       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 03:26:15.245643       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:26:15.245858       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:26:15.245930       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:26:15.246969       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:26:15.247000       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:26:15.247008       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994] <==
	I0116 03:20:57.401141       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:21:26.894943       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:21:27.411121       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:21:56.901768       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:21:57.419569       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:22:26.909031       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:22:27.428864       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:22:56.915006       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:22:57.443131       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:23:26.921605       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:23:27.454089       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:23:56.927620       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:23:57.464243       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:24:26.933462       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:24:27.473145       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0116 03:24:31.526041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="494.827µs"
	I0116 03:24:45.536255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="217.654µs"
	E0116 03:24:56.940686       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:24:57.483039       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:25:26.949971       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:25:27.493957       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:25:56.956077       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:25:57.508314       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:26:26.962958       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:26:27.519537       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047] <==
	I0116 03:13:15.782754       1 server_others.go:69] "Using iptables proxy"
	I0116 03:13:15.798951       1 node.go:141] Successfully retrieved node IP: 192.168.61.150
	I0116 03:13:15.858503       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 03:13:15.858593       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 03:13:15.863888       1 server_others.go:152] "Using iptables Proxier"
	I0116 03:13:15.863954       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 03:13:15.864265       1 server.go:846] "Version info" version="v1.28.4"
	I0116 03:13:15.864301       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:13:15.865371       1 config.go:188] "Starting service config controller"
	I0116 03:13:15.865420       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 03:13:15.865444       1 config.go:97] "Starting endpoint slice config controller"
	I0116 03:13:15.865447       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 03:13:15.868385       1 config.go:315] "Starting node config controller"
	I0116 03:13:15.868530       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 03:13:15.966532       1 shared_informer.go:318] Caches are synced for service config
	I0116 03:13:15.968545       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 03:13:15.968991       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8] <==
	I0116 03:13:11.229387       1 serving.go:348] Generated self-signed cert in-memory
	W0116 03:13:14.179745       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0116 03:13:14.179883       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 03:13:14.179938       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 03:13:14.179978       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 03:13:14.250888       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0116 03:13:14.251012       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:13:14.257604       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0116 03:13:14.257787       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 03:13:14.258930       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0116 03:13:14.259065       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0116 03:13:14.358269       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:12:41 UTC, ends at Tue 2024-01-16 03:26:42 UTC. --
	Jan 16 03:24:07 embed-certs-480663 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:24:07 embed-certs-480663 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:24:16 embed-certs-480663 kubelet[930]: E0116 03:24:16.520509     930 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 16 03:24:16 embed-certs-480663 kubelet[930]: E0116 03:24:16.520570     930 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 16 03:24:16 embed-certs-480663 kubelet[930]: E0116 03:24:16.520789     930 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-t6hp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-7d2fh_kube-system(512cf579-f335-4995-8721-74bb84da776e): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 03:24:16 embed-certs-480663 kubelet[930]: E0116 03:24:16.520825     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:24:31 embed-certs-480663 kubelet[930]: E0116 03:24:31.507546     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:24:45 embed-certs-480663 kubelet[930]: E0116 03:24:45.512229     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:24:56 embed-certs-480663 kubelet[930]: E0116 03:24:56.507465     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:25:07 embed-certs-480663 kubelet[930]: E0116 03:25:07.534765     930 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:25:07 embed-certs-480663 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:25:07 embed-certs-480663 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:25:07 embed-certs-480663 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:25:09 embed-certs-480663 kubelet[930]: E0116 03:25:09.508262     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:25:20 embed-certs-480663 kubelet[930]: E0116 03:25:20.506792     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:25:32 embed-certs-480663 kubelet[930]: E0116 03:25:32.507406     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:25:46 embed-certs-480663 kubelet[930]: E0116 03:25:46.507428     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:25:58 embed-certs-480663 kubelet[930]: E0116 03:25:58.506602     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:26:07 embed-certs-480663 kubelet[930]: E0116 03:26:07.532482     930 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:26:07 embed-certs-480663 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:26:07 embed-certs-480663 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:26:07 embed-certs-480663 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:26:13 embed-certs-480663 kubelet[930]: E0116 03:26:13.507362     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:26:24 embed-certs-480663 kubelet[930]: E0116 03:26:24.506718     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:26:38 embed-certs-480663 kubelet[930]: E0116 03:26:38.506765     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	
	
	==> storage-provisioner [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657] <==
	I0116 03:13:46.907858       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:13:46.922741       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:13:46.922839       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:14:04.372489       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:14:04.373472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-480663_02385622-396e-4f2a-a1a7-96b11526d536!
	I0116 03:14:04.381289       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"74d40386-f551-4067-ae35-b700d12b05b3", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-480663_02385622-396e-4f2a-a1a7-96b11526d536 became leader
	I0116 03:14:04.474327       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-480663_02385622-396e-4f2a-a1a7-96b11526d536!
	
	
	==> storage-provisioner [653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76] <==
	I0116 03:13:15.748004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0116 03:13:45.750999       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
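Note on the log tail above: the kube-scheduler warnings about configmap/extension-apiserver-authentication mean the scheduler fell back to treating requests as anonymous, and the recurring "Could not set up iptables canary" kubelet entries and the first storage-provisioner container's i/o timeout are background noise rather than the cause of this failure. The scheduler log itself names the usual RBAC fix; a minimal sketch of that command for this profile, binding to the user reported in the error above rather than a service account (the rolebinding name here is arbitrary, chosen only for illustration):

	kubectl --context embed-certs-480663 -n kube-system create rolebinding extension-apiserver-authentication-reader-binding \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler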
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-480663 -n embed-certs-480663
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-480663 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-7d2fh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-480663 describe pod metrics-server-57f55c9bc5-7d2fh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-480663 describe pod metrics-server-57f55c9bc5-7d2fh: exit status 1 (74.212562ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-7d2fh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-480663 describe pod metrics-server-57f55c9bc5-7d2fh: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.70s)
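Note: the metrics-server ImagePullBackOff spam in the kubelet log above is expected for this suite. The addon is enabled with its registry rewritten to fake.domain (see CustomAddonRegistries:map[MetricsServer:fake.domain] in the cluster config echoed later in this report), so the image can never be pulled; the failure recorded above is the UserAppExistsAfterStop timeout, not these pull errors, and the post-mortem describe returned NotFound presumably because the pod was already gone by the time it ran. A minimal sketch of how one could confirm the rewritten image on a live cluster, assuming the addon's usual k8s-app=metrics-server label:

	kubectl --context embed-certs-480663 -n kube-system get pods -l k8s-app=metrics-server \
	  -o jsonpath='{.items[*].spec.containers[0].image}'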

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0116 03:22:27.513562  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 03:23:12.496109  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-788237 -n old-k8s-version-788237
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-16 03:29:06.077316471 +0000 UTC m=+5323.961143042
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
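Note: the wait above polls the kubernetes-dashboard namespace for pods labelled k8s-app=kubernetes-dashboard, and the Audit table later in this log shows the corresponding "addons enable dashboard -p old-k8s-version-788237" invocation with no End Time recorded. A minimal sketch of the equivalent manual check, using the same context, namespace and label as the test:

	kubectl --context old-k8s-version-788237 -n kubernetes-dashboard get pods \
	  -l k8s-app=kubernetes-dashboard -o wide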
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-788237 -n old-k8s-version-788237
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-788237 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-788237 logs -n 25: (1.870961848s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-920153                              | cert-expiration-920153       | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	| start   | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| pause   | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-807979 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | disable-driver-mounts-807979                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:06 UTC |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-934668             | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-934668                                   | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-480663            | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-788237        | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-775571  | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC | 16 Jan 24 03:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC |                     |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-934668                  | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-480663                 | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-934668                                   | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC | 16 Jan 24 03:24 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC | 16 Jan 24 03:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-788237             | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC | 16 Jan 24 03:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-775571       | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC | 16 Jan 24 03:23 UTC |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:08:55
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:08:55.523172 1011955 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:08:55.523367 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:08:55.523379 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:08:55.523384 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:08:55.523559 1011955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 03:08:55.524097 1011955 out.go:303] Setting JSON to false
	I0116 03:08:55.525108 1011955 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13885,"bootTime":1705360651,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 03:08:55.525170 1011955 start.go:138] virtualization: kvm guest
	I0116 03:08:55.527591 1011955 out.go:177] * [default-k8s-diff-port-775571] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 03:08:55.529034 1011955 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:08:55.529110 1011955 notify.go:220] Checking for updates...
	I0116 03:08:55.530388 1011955 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:08:55.531787 1011955 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:08:55.533364 1011955 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 03:08:55.534716 1011955 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 03:08:55.535979 1011955 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:08:55.537715 1011955 config.go:182] Loaded profile config "default-k8s-diff-port-775571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:08:55.538436 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:08:55.538496 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:08:55.553180 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I0116 03:08:55.553640 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:08:55.554204 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:08:55.554227 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:08:55.554581 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:08:55.554799 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:08:55.555037 1011955 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:08:55.555380 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:08:55.555442 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:08:55.570254 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46641
	I0116 03:08:55.570682 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:08:55.571208 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:08:55.571235 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:08:55.571622 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:08:55.571835 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:08:55.608921 1011955 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 03:08:55.610466 1011955 start.go:298] selected driver: kvm2
	I0116 03:08:55.610482 1011955 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-775571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-775571 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:08:55.610637 1011955 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:08:55.611416 1011955 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:08:55.611501 1011955 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17967-971255/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 03:08:55.627062 1011955 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 03:08:55.627489 1011955 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 03:08:55.627568 1011955 cni.go:84] Creating CNI manager for ""
	I0116 03:08:55.627585 1011955 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:08:55.627598 1011955 start_flags.go:321] config:
	{Name:default-k8s-diff-port-775571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-775571 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:08:55.627820 1011955 iso.go:125] acquiring lock: {Name:mkc0ce0b4b435c5fb7570521b794e5982bb7bf6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:08:55.630054 1011955 out.go:177] * Starting control plane node default-k8s-diff-port-775571 in cluster default-k8s-diff-port-775571
	I0116 03:08:56.294081 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:08:55.631888 1011955 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:08:55.631938 1011955 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 03:08:55.631953 1011955 cache.go:56] Caching tarball of preloaded images
	I0116 03:08:55.632083 1011955 preload.go:174] Found /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 03:08:55.632097 1011955 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 03:08:55.632257 1011955 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/config.json ...
	I0116 03:08:55.632487 1011955 start.go:365] acquiring machines lock for default-k8s-diff-port-775571: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:08:59.366084 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:05.446075 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:08.518122 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:14.598126 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:17.670148 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:23.750127 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:26.822075 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:32.902064 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:35.974222 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:42.054100 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:45.126136 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:51.206133 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:54.278161 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:00.358119 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:03.430197 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:09.510091 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:12.582128 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:18.662160 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:21.734193 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:27.814164 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:30.886157 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:36.966149 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:40.038146 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:46.118124 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:49.190101 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:55.269989 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:58.342124 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:04.422158 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:07.494110 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:13.574119 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:16.646126 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:22.726139 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:25.798139 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:31.878112 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:34.950159 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:41.030157 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:44.102169 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:50.182089 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:53.254213 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:59.334156 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:02.406103 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:08.486171 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:11.558273 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:17.638145 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:20.710185 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:26.790125 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:29.794327 1011501 start.go:369] acquired machines lock for "embed-certs-480663" in 4m35.850983647s
	I0116 03:12:29.794418 1011501 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:12:29.794429 1011501 fix.go:54] fixHost starting: 
	I0116 03:12:29.794787 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:12:29.794827 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:12:29.810363 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0116 03:12:29.810847 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:12:29.811350 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:12:29.811377 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:12:29.811743 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:12:29.811943 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:29.812098 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetState
	I0116 03:12:29.813836 1011501 fix.go:102] recreateIfNeeded on embed-certs-480663: state=Stopped err=<nil>
	I0116 03:12:29.813863 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	W0116 03:12:29.814085 1011501 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:12:29.816073 1011501 out.go:177] * Restarting existing kvm2 VM for "embed-certs-480663" ...
	I0116 03:12:29.792154 1011460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:12:29.792196 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:12:29.794110 1011460 machine.go:91] provisioned docker machine in 4m37.362238239s
	I0116 03:12:29.794181 1011460 fix.go:56] fixHost completed within 4m37.38762384s
	I0116 03:12:29.794190 1011460 start.go:83] releasing machines lock for "no-preload-934668", held for 4m37.387657639s
	W0116 03:12:29.794218 1011460 start.go:694] error starting host: provision: host is not running
	W0116 03:12:29.794363 1011460 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 03:12:29.794373 1011460 start.go:709] Will try again in 5 seconds ...
	I0116 03:12:29.817479 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Start
	I0116 03:12:29.817644 1011501 main.go:141] libmachine: (embed-certs-480663) Ensuring networks are active...
	I0116 03:12:29.818499 1011501 main.go:141] libmachine: (embed-certs-480663) Ensuring network default is active
	I0116 03:12:29.818799 1011501 main.go:141] libmachine: (embed-certs-480663) Ensuring network mk-embed-certs-480663 is active
	I0116 03:12:29.819175 1011501 main.go:141] libmachine: (embed-certs-480663) Getting domain xml...
	I0116 03:12:29.819788 1011501 main.go:141] libmachine: (embed-certs-480663) Creating domain...
	I0116 03:12:31.021602 1011501 main.go:141] libmachine: (embed-certs-480663) Waiting to get IP...
	I0116 03:12:31.022948 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:31.023338 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:31.023411 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:31.023303 1012490 retry.go:31] will retry after 276.789085ms: waiting for machine to come up
	I0116 03:12:31.301941 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:31.302463 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:31.302500 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:31.302382 1012490 retry.go:31] will retry after 256.134625ms: waiting for machine to come up
	I0116 03:12:31.560002 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:31.560544 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:31.560571 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:31.560490 1012490 retry.go:31] will retry after 439.008262ms: waiting for machine to come up
	I0116 03:12:32.001188 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:32.001642 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:32.001679 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:32.001577 1012490 retry.go:31] will retry after 408.362832ms: waiting for machine to come up
	I0116 03:12:32.411058 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:32.411391 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:32.411423 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:32.411337 1012490 retry.go:31] will retry after 734.236059ms: waiting for machine to come up
	I0116 03:12:33.146871 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:33.147227 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:33.147255 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:33.147168 1012490 retry.go:31] will retry after 675.663635ms: waiting for machine to come up
	I0116 03:12:33.824145 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:33.824670 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:33.824702 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:33.824595 1012490 retry.go:31] will retry after 759.820531ms: waiting for machine to come up
	I0116 03:12:34.796140 1011460 start.go:365] acquiring machines lock for no-preload-934668: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:12:34.585458 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:34.585893 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:34.585919 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:34.585853 1012490 retry.go:31] will retry after 1.421527223s: waiting for machine to come up
	I0116 03:12:36.008778 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:36.009237 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:36.009263 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:36.009198 1012490 retry.go:31] will retry after 1.590569463s: waiting for machine to come up
	I0116 03:12:37.601872 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:37.602247 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:37.602280 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:37.602215 1012490 retry.go:31] will retry after 1.734508863s: waiting for machine to come up
	I0116 03:12:39.339028 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:39.339618 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:39.339652 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:39.339547 1012490 retry.go:31] will retry after 2.357594548s: waiting for machine to come up
	I0116 03:12:41.699172 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:41.699607 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:41.699679 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:41.699610 1012490 retry.go:31] will retry after 2.660303994s: waiting for machine to come up
	I0116 03:12:44.362811 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:44.363139 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:44.363173 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:44.363109 1012490 retry.go:31] will retry after 3.358505884s: waiting for machine to come up
	I0116 03:12:47.725123 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.725787 1011501 main.go:141] libmachine: (embed-certs-480663) Found IP for machine: 192.168.61.150
	I0116 03:12:47.725838 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has current primary IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.725847 1011501 main.go:141] libmachine: (embed-certs-480663) Reserving static IP address...
	I0116 03:12:47.726433 1011501 main.go:141] libmachine: (embed-certs-480663) Reserved static IP address: 192.168.61.150
	I0116 03:12:47.726458 1011501 main.go:141] libmachine: (embed-certs-480663) Waiting for SSH to be available...
	I0116 03:12:47.726486 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "embed-certs-480663", mac: "52:54:00:1c:0e:bd", ip: "192.168.61.150"} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:47.726546 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | skip adding static IP to network mk-embed-certs-480663 - found existing host DHCP lease matching {name: "embed-certs-480663", mac: "52:54:00:1c:0e:bd", ip: "192.168.61.150"}
	I0116 03:12:47.726579 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Getting to WaitForSSH function...
	I0116 03:12:47.728781 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.729264 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:47.729316 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.729447 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Using SSH client type: external
	I0116 03:12:47.729484 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa (-rw-------)
	I0116 03:12:47.729519 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:12:47.729530 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | About to run SSH command:
	I0116 03:12:47.729542 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | exit 0
	I0116 03:12:47.817660 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | SSH cmd err, output: <nil>: 
	I0116 03:12:47.818207 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetConfigRaw
	I0116 03:12:47.818904 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetIP
	I0116 03:12:47.821493 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.821899 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:47.821938 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.822249 1011501 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/config.json ...
	I0116 03:12:47.822458 1011501 machine.go:88] provisioning docker machine ...
	I0116 03:12:47.822477 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:47.822718 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetMachineName
	I0116 03:12:47.822914 1011501 buildroot.go:166] provisioning hostname "embed-certs-480663"
	I0116 03:12:47.822936 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetMachineName
	I0116 03:12:47.823106 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:47.825414 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.825772 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:47.825821 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.825982 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:47.826176 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:47.826353 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:47.826513 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:47.826691 1011501 main.go:141] libmachine: Using SSH client type: native
	I0116 03:12:47.827071 1011501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0116 03:12:47.827091 1011501 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-480663 && echo "embed-certs-480663" | sudo tee /etc/hostname
	I0116 03:12:47.955360 1011501 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-480663
	
	I0116 03:12:47.955398 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:47.958259 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.958575 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:47.958607 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.958814 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:47.959044 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:47.959202 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:47.959343 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:47.959496 1011501 main.go:141] libmachine: Using SSH client type: native
	I0116 03:12:47.959863 1011501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0116 03:12:47.959892 1011501 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-480663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-480663/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-480663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:12:48.082423 1011501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:12:48.082457 1011501 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 03:12:48.082515 1011501 buildroot.go:174] setting up certificates
	I0116 03:12:48.082553 1011501 provision.go:83] configureAuth start
	I0116 03:12:48.082569 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetMachineName
	I0116 03:12:48.082866 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetIP
	I0116 03:12:48.085315 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.085590 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.085622 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.085766 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.088029 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.088306 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.088331 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.088499 1011501 provision.go:138] copyHostCerts
	I0116 03:12:48.088581 1011501 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 03:12:48.088625 1011501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 03:12:48.088713 1011501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 03:12:48.088856 1011501 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 03:12:48.088866 1011501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 03:12:48.088903 1011501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 03:12:48.088981 1011501 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 03:12:48.088996 1011501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 03:12:48.089030 1011501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 03:12:48.089101 1011501 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.embed-certs-480663 san=[192.168.61.150 192.168.61.150 localhost 127.0.0.1 minikube embed-certs-480663]
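The provision step above generates a server certificate whose SANs cover the VM IP, localhost, and the host names minikube expects. Below is a minimal, self-contained Go sketch of producing such a SAN-bearing certificate with crypto/x509; it is self-signed for brevity, whereas the real flow signs the server certificate with the shared minikube CA, and the names and IPs are copied from the san=[…] list logged above.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key and certificate template; names and IPs mirror the SAN list in the log line above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-480663"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "embed-certs-480663"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.61.150"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed here for brevity; the real provision step signs with the minikube CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}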
	I0116 03:12:48.160830 1011501 provision.go:172] copyRemoteCerts
	I0116 03:12:48.160903 1011501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:12:48.160965 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.163939 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.164277 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.164307 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.164531 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.164805 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.165006 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.165166 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:12:48.256101 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:12:48.280042 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 03:12:48.303724 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:12:48.326468 1011501 provision.go:86] duration metric: configureAuth took 243.88726ms
	I0116 03:12:48.326506 1011501 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:12:48.326754 1011501 config.go:182] Loaded profile config "embed-certs-480663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:12:48.326876 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.329344 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.329821 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.329859 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.329995 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.330217 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.330434 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.330590 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.330744 1011501 main.go:141] libmachine: Using SSH client type: native
	I0116 03:12:48.331080 1011501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0116 03:12:48.331099 1011501 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:12:48.635409 1011501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:12:48.635460 1011501 machine.go:91] provisioned docker machine in 812.972689ms
	I0116 03:12:48.635473 1011501 start.go:300] post-start starting for "embed-certs-480663" (driver="kvm2")
	I0116 03:12:48.635489 1011501 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:12:48.635520 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:48.635975 1011501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:12:48.636005 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.638568 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.638912 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.638947 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.639052 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.639272 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.639448 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.639608 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:12:48.729202 1011501 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:12:48.733911 1011501 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:12:48.733985 1011501 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 03:12:48.734062 1011501 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 03:12:48.734185 1011501 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 03:12:48.734437 1011501 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:12:48.744474 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:12:48.767453 1011501 start.go:303] post-start completed in 131.962731ms
	I0116 03:12:48.767483 1011501 fix.go:56] fixHost completed within 18.973054797s
	I0116 03:12:48.767537 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.770091 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.770364 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.770410 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.770516 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.770700 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.770885 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.771062 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.771258 1011501 main.go:141] libmachine: Using SSH client type: native
	I0116 03:12:48.771725 1011501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0116 03:12:48.771743 1011501 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:12:48.886832 1011681 start.go:369] acquired machines lock for "old-k8s-version-788237" in 4m28.568927849s
	I0116 03:12:48.886918 1011681 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:12:48.886930 1011681 fix.go:54] fixHost starting: 
	I0116 03:12:48.887453 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:12:48.887501 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:12:48.904045 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43423
	I0116 03:12:48.904557 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:12:48.905072 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:12:48.905099 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:12:48.905518 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:12:48.905746 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:12:48.905912 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetState
	I0116 03:12:48.907596 1011681 fix.go:102] recreateIfNeeded on old-k8s-version-788237: state=Stopped err=<nil>
	I0116 03:12:48.907628 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	W0116 03:12:48.907820 1011681 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:12:48.909761 1011681 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-788237" ...
	I0116 03:12:48.911234 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Start
	I0116 03:12:48.911413 1011681 main.go:141] libmachine: (old-k8s-version-788237) Ensuring networks are active...
	I0116 03:12:48.912247 1011681 main.go:141] libmachine: (old-k8s-version-788237) Ensuring network default is active
	I0116 03:12:48.912596 1011681 main.go:141] libmachine: (old-k8s-version-788237) Ensuring network mk-old-k8s-version-788237 is active
	I0116 03:12:48.913077 1011681 main.go:141] libmachine: (old-k8s-version-788237) Getting domain xml...
	I0116 03:12:48.913678 1011681 main.go:141] libmachine: (old-k8s-version-788237) Creating domain...
	I0116 03:12:50.157059 1011681 main.go:141] libmachine: (old-k8s-version-788237) Waiting to get IP...
	I0116 03:12:50.158170 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:50.158626 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:50.158723 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:50.158597 1012611 retry.go:31] will retry after 219.259678ms: waiting for machine to come up
	I0116 03:12:48.886627 1011501 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374768.861682880
	
	I0116 03:12:48.886687 1011501 fix.go:206] guest clock: 1705374768.861682880
	I0116 03:12:48.886698 1011501 fix.go:219] Guest: 2024-01-16 03:12:48.86168288 +0000 UTC Remote: 2024-01-16 03:12:48.767487292 +0000 UTC m=+294.991502995 (delta=94.195588ms)
	I0116 03:12:48.886721 1011501 fix.go:190] guest clock delta is within tolerance: 94.195588ms
	I0116 03:12:48.886726 1011501 start.go:83] releasing machines lock for "embed-certs-480663", held for 19.09234257s
	I0116 03:12:48.886751 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:48.887062 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetIP
	I0116 03:12:48.889754 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.890098 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.890128 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.890347 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:48.890906 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:48.891124 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:48.891223 1011501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:12:48.891269 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.891451 1011501 ssh_runner.go:195] Run: cat /version.json
	I0116 03:12:48.891477 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.894134 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.894220 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.894577 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.894619 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.894646 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.894672 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.894934 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.894944 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.895100 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.895122 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.895200 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.895270 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.895367 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:12:48.895401 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:12:48.979839 1011501 ssh_runner.go:195] Run: systemctl --version
	I0116 03:12:49.008683 1011501 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:12:49.161550 1011501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:12:49.167838 1011501 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:12:49.167937 1011501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:12:49.184428 1011501 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:12:49.184457 1011501 start.go:475] detecting cgroup driver to use...
	I0116 03:12:49.184542 1011501 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:12:49.202177 1011501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:12:49.215021 1011501 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:12:49.215100 1011501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:12:49.230944 1011501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:12:49.245401 1011501 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:12:49.368410 1011501 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:12:49.490710 1011501 docker.go:233] disabling docker service ...
	I0116 03:12:49.490804 1011501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:12:49.504462 1011501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:12:49.515523 1011501 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:12:49.632751 1011501 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:12:49.769999 1011501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:12:49.785053 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:12:49.803377 1011501 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:12:49.803436 1011501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:12:49.812729 1011501 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:12:49.812804 1011501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:12:49.822106 1011501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:12:49.831270 1011501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:12:49.840256 1011501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:12:49.849610 1011501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:12:49.858638 1011501 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:12:49.858713 1011501 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:12:49.872437 1011501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:12:49.882932 1011501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:12:50.003747 1011501 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:12:50.178808 1011501 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:12:50.178901 1011501 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:12:50.184631 1011501 start.go:543] Will wait 60s for crictl version
	I0116 03:12:50.184708 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:12:50.189104 1011501 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:12:50.226713 1011501 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:12:50.226833 1011501 ssh_runner.go:195] Run: crio --version
	I0116 03:12:50.285581 1011501 ssh_runner.go:195] Run: crio --version
	I0116 03:12:50.336274 1011501 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 03:12:50.337928 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetIP
	I0116 03:12:50.340938 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:50.341389 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:50.341434 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:50.341707 1011501 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0116 03:12:50.346116 1011501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:12:50.358498 1011501 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:12:50.358562 1011501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:12:50.399016 1011501 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 03:12:50.399102 1011501 ssh_runner.go:195] Run: which lz4
	I0116 03:12:50.403562 1011501 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:12:50.407754 1011501 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:12:50.407781 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 03:12:52.338554 1011501 crio.go:444] Took 1.935021 seconds to copy over tarball
	I0116 03:12:52.338657 1011501 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
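Because /preloaded.tar.lz4 is missing on the VM, the run copies the cached preload tarball over SSH and unpacks it with tar -I lz4. A minimal local Go sketch of that check-then-extract step follows; it assumes the tarball already sits on the local filesystem and simply shells out to the same tar invocation the log records.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensurePreload fails if the tarball is absent and otherwise unpacks it with the
// same tar invocation recorded in the log above.
func ensurePreload(tarball, destDir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball missing: %w", err)
	}
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := ensurePreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}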
	I0116 03:12:50.379220 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:50.379668 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:50.379707 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:50.379617 1012611 retry.go:31] will retry after 265.569137ms: waiting for machine to come up
	I0116 03:12:50.647311 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:50.648272 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:50.648308 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:50.648165 1012611 retry.go:31] will retry after 322.357919ms: waiting for machine to come up
	I0116 03:12:50.971860 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:50.972437 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:50.972466 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:50.972414 1012611 retry.go:31] will retry after 554.899929ms: waiting for machine to come up
	I0116 03:12:51.529304 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:51.529854 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:51.529881 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:51.529781 1012611 retry.go:31] will retry after 666.131492ms: waiting for machine to come up
	I0116 03:12:52.197244 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:52.197715 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:52.197747 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:52.197677 1012611 retry.go:31] will retry after 905.276637ms: waiting for machine to come up
	I0116 03:12:53.104496 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:53.105075 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:53.105113 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:53.105018 1012611 retry.go:31] will retry after 849.59257ms: waiting for machine to come up
	I0116 03:12:53.956756 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:53.957265 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:53.957310 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:53.957214 1012611 retry.go:31] will retry after 1.208772763s: waiting for machine to come up
	I0116 03:12:55.168258 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:55.168715 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:55.168750 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:55.168656 1012611 retry.go:31] will retry after 1.842317385s: waiting for machine to come up
	I0116 03:12:55.368146 1011501 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.02945237s)
	I0116 03:12:55.368186 1011501 crio.go:451] Took 3.029602 seconds to extract the tarball
	I0116 03:12:55.368197 1011501 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:12:55.409542 1011501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:12:55.468263 1011501 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:12:55.468298 1011501 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:12:55.468401 1011501 ssh_runner.go:195] Run: crio config
	I0116 03:12:55.534437 1011501 cni.go:84] Creating CNI manager for ""
	I0116 03:12:55.534473 1011501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:12:55.534500 1011501 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:12:55.534554 1011501 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.150 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-480663 NodeName:embed-certs-480663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:12:55.534761 1011501 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-480663"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:12:55.534856 1011501 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-480663 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-480663 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:12:55.534953 1011501 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:12:55.550549 1011501 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:12:55.550643 1011501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:12:55.560831 1011501 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0116 03:12:55.578611 1011501 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:12:55.600405 1011501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0116 03:12:55.620622 1011501 ssh_runner.go:195] Run: grep 192.168.61.150	control-plane.minikube.internal$ /etc/hosts
	I0116 03:12:55.625483 1011501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
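The command above rewrites /etc/hosts idempotently: drop any existing line for the name, append the fresh IP-to-name mapping, and copy the temp file back into place. A minimal Go sketch of the same upsert logic follows; the file path in main is a hypothetical local copy, since the real run edits /etc/hosts through sudo.

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing line ending in "\t<name>" and appends a fresh
// "<ip>\t<name>" mapping, mirroring the grep -v / echo / cp pipeline in the log.
func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same effect as: grep -v $'\t<name>$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Hypothetical local copy; the real run writes /etc/hosts via sudo cp from a temp file.
	if err := upsertHostsEntry("hosts.copy", "192.168.61.150", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}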
	I0116 03:12:55.638353 1011501 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663 for IP: 192.168.61.150
	I0116 03:12:55.638404 1011501 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:12:55.638588 1011501 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 03:12:55.638649 1011501 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 03:12:55.638772 1011501 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/client.key
	I0116 03:12:55.638852 1011501 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/apiserver.key.2512ac4f
	I0116 03:12:55.638933 1011501 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/proxy-client.key
	I0116 03:12:55.639122 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 03:12:55.639164 1011501 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 03:12:55.639180 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 03:12:55.639217 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 03:12:55.639254 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:12:55.639286 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 03:12:55.639341 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:12:55.640395 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:12:55.667612 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 03:12:55.692576 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:12:55.717257 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:12:55.741983 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:12:55.766577 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:12:55.792372 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:12:55.817385 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:12:55.843037 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 03:12:55.873486 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:12:55.898499 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 03:12:55.925406 1011501 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:12:55.945389 1011501 ssh_runner.go:195] Run: openssl version
	I0116 03:12:55.951579 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 03:12:55.963228 1011501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 03:12:55.968375 1011501 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 03:12:55.968448 1011501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 03:12:55.974792 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:12:55.986496 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:12:55.998112 1011501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:12:56.003308 1011501 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:12:56.003397 1011501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:12:56.009406 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:12:56.022123 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 03:12:56.035041 1011501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 03:12:56.040564 1011501 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 03:12:56.040636 1011501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 03:12:56.047058 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 03:12:56.059998 1011501 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:12:56.065241 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:12:56.071918 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:12:56.078512 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:12:56.085645 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:12:56.092405 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:12:56.099010 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
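Each "openssl x509 -checkend 86400" call above asks whether the certificate will expire within the next 24 hours (86400 seconds). A minimal Go sketch of the same check using crypto/x509 is shown below; the certificate path in main is just one of the files the run inspects and is only an example.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at certPath expires inside the given
// window, which is what "openssl x509 -checkend 86400" tests for a 24-hour window.
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Example path; the run above checks each certificate under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}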
	I0116 03:12:56.105679 1011501 kubeadm.go:404] StartCluster: {Name:embed-certs-480663 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-480663 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:12:56.105773 1011501 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:12:56.105859 1011501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:12:56.153053 1011501 cri.go:89] found id: ""
	I0116 03:12:56.153168 1011501 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:12:56.165415 1011501 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:12:56.165448 1011501 kubeadm.go:636] restartCluster start
	I0116 03:12:56.165516 1011501 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:12:56.175884 1011501 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:56.177147 1011501 kubeconfig.go:92] found "embed-certs-480663" server: "https://192.168.61.150:8443"
	I0116 03:12:56.179924 1011501 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:12:56.189868 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:56.189935 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:56.202554 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:56.690001 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:56.690087 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:56.702873 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:57.190439 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:57.190526 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:57.203483 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:57.691004 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:57.691089 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:57.705628 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:58.190127 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:58.190268 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:58.203066 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:58.690714 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:58.690836 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:58.703512 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:57.013734 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:57.014338 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:57.014374 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:57.014291 1012611 retry.go:31] will retry after 1.812964487s: waiting for machine to come up
	I0116 03:12:58.828551 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:58.829042 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:58.829068 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:58.828972 1012611 retry.go:31] will retry after 2.844481084s: waiting for machine to come up
	I0116 03:12:59.190193 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:59.190305 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:59.202672 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:59.690192 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:59.690304 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:59.702988 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:00.190097 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:00.190194 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:00.202817 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:00.690356 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:00.690469 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:00.703381 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:01.190016 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:01.190103 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:01.205508 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:01.689888 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:01.689982 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:01.706681 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:02.190049 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:02.190151 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:02.206668 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:02.690222 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:02.690361 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:02.706881 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:03.189909 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:03.190004 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:03.203138 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:03.690789 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:03.690907 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:03.703489 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:01.674784 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:01.675368 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:13:01.675395 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:13:01.675337 1012611 retry.go:31] will retry after 3.198176955s: waiting for machine to come up
	I0116 03:13:04.875399 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:04.875880 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:13:04.875911 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:13:04.875824 1012611 retry.go:31] will retry after 3.762316841s: waiting for machine to come up
	I0116 03:13:04.190804 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:04.190926 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:04.203114 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:04.690805 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:04.690935 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:04.703456 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:05.190648 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:05.190760 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:05.203129 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:05.690744 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:05.690892 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:05.703526 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:06.190070 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:06.190217 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:06.202457 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:06.202494 1011501 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:13:06.202504 1011501 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:13:06.202517 1011501 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:13:06.202598 1011501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:13:06.241146 1011501 cri.go:89] found id: ""
	I0116 03:13:06.241255 1011501 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:13:06.257465 1011501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:13:06.267655 1011501 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:13:06.267728 1011501 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:06.277601 1011501 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:06.277628 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:06.388578 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:07.024945 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:07.210419 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:07.275175 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:07.353969 1011501 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:13:07.354074 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:07.854253 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:08.354855 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:10.035188 1011955 start.go:369] acquired machines lock for "default-k8s-diff-port-775571" in 4m14.402660122s
	I0116 03:13:10.035270 1011955 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:13:10.035278 1011955 fix.go:54] fixHost starting: 
	I0116 03:13:10.035719 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:10.035767 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:10.054435 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35337
	I0116 03:13:10.054968 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:10.055812 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:13:10.055849 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:10.056304 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:10.056546 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:10.056719 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetState
	I0116 03:13:10.058431 1011955 fix.go:102] recreateIfNeeded on default-k8s-diff-port-775571: state=Stopped err=<nil>
	I0116 03:13:10.058467 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	W0116 03:13:10.058666 1011955 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:13:10.060742 1011955 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-775571" ...
	I0116 03:13:08.642785 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.643327 1011681 main.go:141] libmachine: (old-k8s-version-788237) Found IP for machine: 192.168.39.91
	I0116 03:13:08.643356 1011681 main.go:141] libmachine: (old-k8s-version-788237) Reserving static IP address...
	I0116 03:13:08.643376 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has current primary IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.643757 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "old-k8s-version-788237", mac: "52:54:00:64:b7:2e", ip: "192.168.39.91"} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:08.643780 1011681 main.go:141] libmachine: (old-k8s-version-788237) Reserved static IP address: 192.168.39.91
	I0116 03:13:08.643798 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | skip adding static IP to network mk-old-k8s-version-788237 - found existing host DHCP lease matching {name: "old-k8s-version-788237", mac: "52:54:00:64:b7:2e", ip: "192.168.39.91"}
	I0116 03:13:08.643810 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Getting to WaitForSSH function...
	I0116 03:13:08.643819 1011681 main.go:141] libmachine: (old-k8s-version-788237) Waiting for SSH to be available...
	I0116 03:13:08.646037 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.646391 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:08.646437 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.646519 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Using SSH client type: external
	I0116 03:13:08.646553 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa (-rw-------)
	I0116 03:13:08.646581 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:13:08.646591 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | About to run SSH command:
	I0116 03:13:08.646599 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | exit 0
	I0116 03:13:08.738009 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | SSH cmd err, output: <nil>: 
	I0116 03:13:08.738363 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetConfigRaw
	I0116 03:13:08.739116 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetIP
	I0116 03:13:08.741759 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.742196 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:08.742235 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.742479 1011681 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/config.json ...
	I0116 03:13:08.742682 1011681 machine.go:88] provisioning docker machine ...
	I0116 03:13:08.742701 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:08.742937 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetMachineName
	I0116 03:13:08.743154 1011681 buildroot.go:166] provisioning hostname "old-k8s-version-788237"
	I0116 03:13:08.743184 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetMachineName
	I0116 03:13:08.743338 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:08.745489 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.745856 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:08.745897 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.746073 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:08.746292 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:08.746426 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:08.746580 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:08.746791 1011681 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:08.747298 1011681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0116 03:13:08.747322 1011681 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-788237 && echo "old-k8s-version-788237" | sudo tee /etc/hostname
	I0116 03:13:08.878928 1011681 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-788237
	
	I0116 03:13:08.878966 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:08.882019 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.882417 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:08.882468 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.882564 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:08.882806 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:08.883022 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:08.883202 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:08.883384 1011681 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:08.883704 1011681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0116 03:13:08.883723 1011681 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-788237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-788237/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-788237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:13:09.011161 1011681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:13:09.011209 1011681 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 03:13:09.011245 1011681 buildroot.go:174] setting up certificates
	I0116 03:13:09.011261 1011681 provision.go:83] configureAuth start
	I0116 03:13:09.011275 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetMachineName
	I0116 03:13:09.011649 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetIP
	I0116 03:13:09.014580 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.014920 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.014954 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.015107 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:09.017381 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.017701 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.017731 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.017854 1011681 provision.go:138] copyHostCerts
	I0116 03:13:09.017937 1011681 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 03:13:09.017951 1011681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 03:13:09.018028 1011681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 03:13:09.018175 1011681 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 03:13:09.018190 1011681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 03:13:09.018223 1011681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 03:13:09.018307 1011681 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 03:13:09.018318 1011681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 03:13:09.018342 1011681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 03:13:09.018403 1011681 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-788237 san=[192.168.39.91 192.168.39.91 localhost 127.0.0.1 minikube old-k8s-version-788237]
	I0116 03:13:09.280154 1011681 provision.go:172] copyRemoteCerts
	I0116 03:13:09.280224 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:13:09.280252 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:09.283485 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.283829 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.283862 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.284193 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:09.284454 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.284599 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:09.284787 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:13:09.382440 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:13:09.410373 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0116 03:13:09.435625 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:13:09.460028 1011681 provision.go:86] duration metric: configureAuth took 448.744455ms
	I0116 03:13:09.460066 1011681 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:13:09.460309 1011681 config.go:182] Loaded profile config "old-k8s-version-788237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:13:09.460422 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:09.463079 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.463354 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.463396 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.463526 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:09.463784 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.464087 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.464272 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:09.464458 1011681 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:09.464814 1011681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0116 03:13:09.464838 1011681 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:13:09.783889 1011681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:13:09.783923 1011681 machine.go:91] provisioned docker machine in 1.041225615s
	I0116 03:13:09.783938 1011681 start.go:300] post-start starting for "old-k8s-version-788237" (driver="kvm2")
	I0116 03:13:09.783955 1011681 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:13:09.783981 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:09.784410 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:13:09.784452 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:09.787427 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.787841 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.787879 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.788022 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:09.788233 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.788409 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:09.788566 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:13:09.875964 1011681 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:13:09.880665 1011681 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:13:09.880700 1011681 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 03:13:09.880782 1011681 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 03:13:09.880879 1011681 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 03:13:09.881013 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:13:09.890286 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:13:09.913554 1011681 start.go:303] post-start completed in 129.596487ms
	I0116 03:13:09.913586 1011681 fix.go:56] fixHost completed within 21.026657085s
	I0116 03:13:09.913610 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:09.916767 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.917228 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.917265 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.917551 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:09.917759 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.918017 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.918222 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:09.918418 1011681 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:09.918793 1011681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0116 03:13:09.918816 1011681 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:13:10.035012 1011681 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374789.980840898
	
	I0116 03:13:10.035040 1011681 fix.go:206] guest clock: 1705374789.980840898
	I0116 03:13:10.035051 1011681 fix.go:219] Guest: 2024-01-16 03:13:09.980840898 +0000 UTC Remote: 2024-01-16 03:13:09.913590445 +0000 UTC m=+289.770143089 (delta=67.250453ms)
	I0116 03:13:10.035083 1011681 fix.go:190] guest clock delta is within tolerance: 67.250453ms
	I0116 03:13:10.035093 1011681 start.go:83] releasing machines lock for "old-k8s-version-788237", held for 21.148206908s
	I0116 03:13:10.035126 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:10.035410 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetIP
	I0116 03:13:10.038396 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.038745 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:10.038781 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.039048 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:10.039659 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:10.039881 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:10.039978 1011681 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:13:10.040024 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:10.040135 1011681 ssh_runner.go:195] Run: cat /version.json
	I0116 03:13:10.040160 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:10.043099 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.043326 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.043459 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:10.043482 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.043655 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:10.043756 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:10.043802 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.044001 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:10.044018 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:10.044241 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:10.044249 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:10.044409 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:10.044498 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:13:10.044528 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:13:10.131865 1011681 ssh_runner.go:195] Run: systemctl --version
	I0116 03:13:10.160343 1011681 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:13:10.062248 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Start
	I0116 03:13:10.062475 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Ensuring networks are active...
	I0116 03:13:10.063470 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Ensuring network default is active
	I0116 03:13:10.063800 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Ensuring network mk-default-k8s-diff-port-775571 is active
	I0116 03:13:10.064263 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Getting domain xml...
	I0116 03:13:10.065010 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Creating domain...
	I0116 03:13:10.316936 1011681 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:13:10.324330 1011681 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:13:10.324409 1011681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:13:10.343057 1011681 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:13:10.343090 1011681 start.go:475] detecting cgroup driver to use...
	I0116 03:13:10.343184 1011681 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:13:10.359325 1011681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:13:10.377310 1011681 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:13:10.377386 1011681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:13:10.396512 1011681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:13:10.416458 1011681 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:13:10.540518 1011681 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:13:10.671885 1011681 docker.go:233] disabling docker service ...
	I0116 03:13:10.672042 1011681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:13:10.689182 1011681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:13:10.705235 1011681 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:13:10.826545 1011681 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:13:10.941453 1011681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:13:10.954337 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:13:10.974814 1011681 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0116 03:13:10.974894 1011681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:10.984741 1011681 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:13:10.984811 1011681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:10.994451 1011681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:11.004459 1011681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:11.014409 1011681 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:13:11.025057 1011681 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:13:11.033911 1011681 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:13:11.034003 1011681 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:13:11.048044 1011681 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:13:11.056724 1011681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:13:11.180914 1011681 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:13:11.369876 1011681 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:13:11.369971 1011681 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:13:11.375568 1011681 start.go:543] Will wait 60s for crictl version
	I0116 03:13:11.375638 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:11.379992 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:13:11.422734 1011681 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:13:11.422837 1011681 ssh_runner.go:195] Run: crio --version
	I0116 03:13:11.477909 1011681 ssh_runner.go:195] Run: crio --version
	I0116 03:13:11.536220 1011681 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0116 03:13:08.855145 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:09.355119 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:09.854553 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:09.882463 1011501 api_server.go:72] duration metric: took 2.528495988s to wait for apiserver process to appear ...
	I0116 03:13:09.882491 1011501 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:13:09.882516 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:09.883135 1011501 api_server.go:269] stopped: https://192.168.61.150:8443/healthz: Get "https://192.168.61.150:8443/healthz": dial tcp 192.168.61.150:8443: connect: connection refused
	I0116 03:13:10.382909 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:11.537589 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetIP
	I0116 03:13:11.540815 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:11.541169 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:11.541199 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:11.541459 1011681 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 03:13:11.546215 1011681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:13:11.562291 1011681 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 03:13:11.562378 1011681 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:13:11.603542 1011681 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 03:13:11.603627 1011681 ssh_runner.go:195] Run: which lz4
	I0116 03:13:11.607873 1011681 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:13:11.613536 1011681 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:13:11.613577 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0116 03:13:13.454225 1011681 crio.go:444] Took 1.846391 seconds to copy over tarball
	I0116 03:13:13.454334 1011681 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:13:11.425638 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting to get IP...
	I0116 03:13:11.426748 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.427214 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.427314 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:11.427187 1012757 retry.go:31] will retry after 234.45504ms: waiting for machine to come up
	I0116 03:13:11.663924 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.664619 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.664664 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:11.664556 1012757 retry.go:31] will retry after 318.711044ms: waiting for machine to come up
	I0116 03:13:11.985398 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.985941 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.985978 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:11.985917 1012757 retry.go:31] will retry after 463.405848ms: waiting for machine to come up
	I0116 03:13:12.450776 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:12.451335 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:12.451361 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:12.451270 1012757 retry.go:31] will retry after 428.299543ms: waiting for machine to come up
	I0116 03:13:12.881383 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:12.881910 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:12.881946 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:12.881856 1012757 retry.go:31] will retry after 564.023978ms: waiting for machine to come up
	I0116 03:13:13.447917 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:13.448436 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:13.448492 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:13.448405 1012757 retry.go:31] will retry after 694.298162ms: waiting for machine to come up
	I0116 03:13:14.144469 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:14.145037 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:14.145084 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:14.144953 1012757 retry.go:31] will retry after 821.505467ms: waiting for machine to come up
	I0116 03:13:14.967941 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:14.968577 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:14.968611 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:14.968486 1012757 retry.go:31] will retry after 1.079929031s: waiting for machine to come up
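The block above shows libmachine repeatedly polling libvirt until the default-k8s-diff-port-775571 domain picks up a DHCP lease, waiting a little longer on each attempt. A minimal sketch of that retry pattern, assuming a hypothetical lookupIP helper rather than minikube's actual kvm2 driver code:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the hypervisor's DHCP leases for a
// domain's current IP address; it is a hypothetical helper, not minikube code.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a growing, jittered delay, similar to the
// "will retry after ..." lines in the log above.
func waitForIP(domain string, attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		ip, err := lookupIP(domain)
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay on each failed attempt (sketch only)
	}
	return "", fmt.Errorf("domain %s never reported an IP", domain)
}

func main() {
	if _, err := waitForIP("default-k8s-diff-port-775571", 5); err != nil {
		fmt.Println(err)
	}
}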
	I0116 03:13:14.175997 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:13:14.176046 1011501 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:13:14.176064 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:14.244918 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:14.244979 1011501 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:14.383226 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:14.390006 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:14.390047 1011501 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:14.883209 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:14.889127 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:14.889170 1011501 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:15.382688 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:15.399515 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:15.399554 1011501 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:15.883088 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:15.891853 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 200:
	ok
	I0116 03:13:15.905636 1011501 api_server.go:141] control plane version: v1.28.4
	I0116 03:13:15.905683 1011501 api_server.go:131] duration metric: took 6.023183183s to wait for apiserver health ...
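The 403 and 500 responses above are expected while the apiserver's post-start hooks are still completing; minikube keeps polling /healthz until it finally answers 200, which takes about six seconds in this run. A rough sketch of such a poll loop, assuming an insecure HTTP client purely for brevity (the real code authenticates and trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it answers 200
// or the deadline passes. TLS verification is skipped only to keep the sketch
// short; a real client would present credentials and verify the cluster CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz check passed
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.150:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}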
	I0116 03:13:15.905697 1011501 cni.go:84] Creating CNI manager for ""
	I0116 03:13:15.905706 1011501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:13:15.907935 1011501 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:13:15.909466 1011501 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:13:15.922375 1011501 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
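Having matched the kvm2 driver with the crio runtime, minikube writes a bridge CNI conflist (457 bytes) to /etc/cni/net.d/1-k8s.conflist. The exact file is not reproduced in the log; the sketch below writes a generic bridge plus host-local IPAM configuration as an assumption about its general shape, not the literal contents minikube ships:

package main

import (
	"fmt"
	"os"
)

// A generic bridge CNI configuration; the real 1-k8s.conflist may differ in
// names and options, this constant is only illustrative.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    }
  ]
}`

func main() {
	// Equivalent of the "sudo mkdir -p /etc/cni/net.d" and scp steps in the log.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}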
	I0116 03:13:15.952930 1011501 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:13:15.964437 1011501 system_pods.go:59] 8 kube-system pods found
	I0116 03:13:15.964485 1011501 system_pods.go:61] "coredns-5dd5756b68-stqh5" [adbcef96-218b-42ed-9daf-72c274be0690] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:13:15.964494 1011501 system_pods.go:61] "etcd-embed-certs-480663" [6694af11-3b1a-4b84-adcb-3416b87a076f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:13:15.964502 1011501 system_pods.go:61] "kube-apiserver-embed-certs-480663" [3a3d0e5d-35eb-4f2f-9686-b17af19bc777] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:13:15.964508 1011501 system_pods.go:61] "kube-controller-manager-embed-certs-480663" [729b671f-23b9-409b-9f11-d6992f2355c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:13:15.964514 1011501 system_pods.go:61] "kube-proxy-j4786" [aabb98a7-fe55-4105-a5d2-c1e312464107] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:13:15.964520 1011501 system_pods.go:61] "kube-scheduler-embed-certs-480663" [11baa5d7-6e7b-4a25-be6f-e7975b51023d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:13:15.964525 1011501 system_pods.go:61] "metrics-server-57f55c9bc5-7d2fh" [512cf579-f335-4995-8721-74bb84da776e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:13:15.964541 1011501 system_pods.go:61] "storage-provisioner" [da59ff59-869f-48a9-a5c5-c95bb807cbcf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:13:15.964549 1011501 system_pods.go:74] duration metric: took 11.584104ms to wait for pod list to return data ...
	I0116 03:13:15.964560 1011501 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:13:15.971265 1011501 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:13:15.971310 1011501 node_conditions.go:123] node cpu capacity is 2
	I0116 03:13:15.971324 1011501 node_conditions.go:105] duration metric: took 6.758143ms to run NodePressure ...
	I0116 03:13:15.971346 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:16.332558 1011501 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:13:16.343354 1011501 kubeadm.go:787] kubelet initialised
	I0116 03:13:16.343392 1011501 kubeadm.go:788] duration metric: took 10.793951ms waiting for restarted kubelet to initialise ...
	I0116 03:13:16.343403 1011501 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:13:16.370777 1011501 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:16.393556 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.393599 1011501 pod_ready.go:81] duration metric: took 22.772202ms waiting for pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:16.393613 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.393622 1011501 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:16.410313 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "etcd-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.410355 1011501 pod_ready.go:81] duration metric: took 16.72056ms waiting for pod "etcd-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:16.410371 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "etcd-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.410380 1011501 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:16.422777 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.422819 1011501 pod_ready.go:81] duration metric: took 12.426537ms waiting for pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:16.422834 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.422843 1011501 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:16.434722 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.434760 1011501 pod_ready.go:81] duration metric: took 11.904523ms waiting for pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:16.434773 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.434783 1011501 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j4786" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:17.092534 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "kube-proxy-j4786" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.092568 1011501 pod_ready.go:81] duration metric: took 657.773691ms waiting for pod "kube-proxy-j4786" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:17.092581 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "kube-proxy-j4786" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.092590 1011501 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:17.158257 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.158294 1011501 pod_ready.go:81] duration metric: took 65.69466ms waiting for pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:17.158308 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.158317 1011501 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:17.872108 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.872149 1011501 pod_ready.go:81] duration metric: took 713.820621ms waiting for pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:17.872162 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.872171 1011501 pod_ready.go:38] duration metric: took 1.528756103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
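Each pod_ready.go line above asks whether a system-critical pod has reached the Ready condition, and skips the wait when the hosting node itself is not yet Ready. A condensed sketch of that per-pod check using client-go, assuming kubeconfig access from outside the cluster (the kubeconfig path below is the one updated earlier in this log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod currently has the Ready condition
// set to True.
func isPodReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17967-971255/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ready, err := isPodReady(client, "kube-system", "coredns-5dd5756b68-stqh5")
	fmt.Println(ready, err)
}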
	I0116 03:13:17.872202 1011501 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:13:17.890580 1011501 ops.go:34] apiserver oom_adj: -16
	I0116 03:13:17.890613 1011501 kubeadm.go:640] restartCluster took 21.725155834s
	I0116 03:13:17.890626 1011501 kubeadm.go:406] StartCluster complete in 21.784958156s
	I0116 03:13:17.890693 1011501 settings.go:142] acquiring lock: {Name:mk39c69fb5454efd5afab021c02c3bec1f1b4e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:13:17.890792 1011501 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:13:17.893858 1011501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:13:18.133588 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:13:18.133712 1011501 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:13:18.133875 1011501 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-480663"
	I0116 03:13:18.133878 1011501 addons.go:69] Setting metrics-server=true in profile "embed-certs-480663"
	I0116 03:13:18.133911 1011501 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-480663"
	I0116 03:13:18.133906 1011501 addons.go:69] Setting default-storageclass=true in profile "embed-certs-480663"
	I0116 03:13:18.133920 1011501 addons.go:234] Setting addon metrics-server=true in "embed-certs-480663"
	W0116 03:13:18.133924 1011501 addons.go:243] addon storage-provisioner should already be in state true
	W0116 03:13:18.133932 1011501 addons.go:243] addon metrics-server should already be in state true
	I0116 03:13:18.133939 1011501 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-480663"
	I0116 03:13:18.133951 1011501 config.go:182] Loaded profile config "embed-certs-480663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:13:18.133990 1011501 host.go:66] Checking if "embed-certs-480663" exists ...
	I0116 03:13:18.133990 1011501 host.go:66] Checking if "embed-certs-480663" exists ...
	I0116 03:13:18.134422 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.134435 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.134441 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.134458 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.134482 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.134496 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.152772 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0116 03:13:18.153335 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.153822 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36539
	I0116 03:13:18.153952 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.153978 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.153953 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I0116 03:13:18.154272 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.154435 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.154637 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.154836 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.154860 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.154956 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetState
	I0116 03:13:18.155092 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.155118 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.155183 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.155408 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.155884 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.155939 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.155953 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.155985 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.159097 1011501 addons.go:234] Setting addon default-storageclass=true in "embed-certs-480663"
	W0116 03:13:18.159139 1011501 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:13:18.159175 1011501 host.go:66] Checking if "embed-certs-480663" exists ...
	I0116 03:13:18.159631 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.159709 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.176336 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34615
	I0116 03:13:18.177044 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35519
	I0116 03:13:18.177237 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.177646 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.177946 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.177971 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.178455 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.178505 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.178538 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.178951 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.178981 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetState
	I0116 03:13:18.179150 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetState
	I0116 03:13:18.179705 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0116 03:13:18.180094 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.180921 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.180934 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.181286 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.181902 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.181925 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.182091 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:13:18.182301 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:13:18.302482 1011501 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:18.202219 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40999
	I0116 03:13:18.581432 1011501 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:13:18.581416 1011501 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:13:18.709000 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:13:18.582081 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.709096 1011501 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:13:18.709126 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:13:18.709154 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:13:18.586643 1011501 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-480663" context rescaled to 1 replicas
	I0116 03:13:18.709184 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:13:18.709223 1011501 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:13:18.588936 1011501 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 03:13:18.709955 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.713092 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.713501 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.713740 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:13:18.714270 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:13:18.722911 1011501 out.go:177] * Verifying Kubernetes components...
	I0116 03:13:18.722952 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.723026 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:13:18.723078 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:13:18.724877 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.723318 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:13:18.724891 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.723318 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:13:18.724748 1011501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:13:18.725164 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:13:18.725165 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:13:18.725281 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.725333 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:13:18.725384 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:13:18.725507 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetState
	I0116 03:13:18.727468 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:13:18.727734 1011501 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:13:18.727754 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:13:18.727774 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:13:18.730959 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.731419 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:13:18.731488 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.731819 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:13:18.732013 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:13:18.732162 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:13:18.732328 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:13:18.750255 1011501 node_ready.go:35] waiting up to 6m0s for node "embed-certs-480663" to be "Ready" ...
	I0116 03:13:16.997115 1011681 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.542741465s)
	I0116 03:13:16.997156 1011681 crio.go:451] Took 3.542892 seconds to extract the tarball
	I0116 03:13:16.997169 1011681 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:13:17.046929 1011681 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:13:17.098255 1011681 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 03:13:17.098280 1011681 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:13:17.098386 1011681 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:17.098392 1011681 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:13:17.098461 1011681 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:13:17.098503 1011681 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:13:17.098391 1011681 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:13:17.098621 1011681 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0116 03:13:17.098462 1011681 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0116 03:13:17.098390 1011681 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:13:17.100000 1011681 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:13:17.100009 1011681 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0116 03:13:17.100019 1011681 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:13:17.100039 1011681 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:13:17.100005 1011681 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:17.100438 1011681 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0116 03:13:17.100461 1011681 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:13:17.100666 1011681 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:13:17.256272 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0116 03:13:17.256286 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0116 03:13:17.258442 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0116 03:13:17.259457 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:13:17.264044 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:13:17.267216 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:13:17.274663 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:13:17.423339 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:17.423697 1011681 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0116 03:13:17.423773 1011681 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0116 03:13:17.423813 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.460324 1011681 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0116 03:13:17.460382 1011681 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0116 03:13:17.460441 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.483883 1011681 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0116 03:13:17.483936 1011681 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:13:17.483999 1011681 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0116 03:13:17.484066 1011681 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0116 03:13:17.484087 1011681 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:13:17.484104 1011681 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:13:17.484135 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.484007 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.484144 1011681 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0116 03:13:17.484142 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.484166 1011681 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:13:17.484211 1011681 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0116 03:13:17.484237 1011681 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:13:17.484284 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.484243 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.613454 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0116 03:13:17.613555 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0116 03:13:17.613587 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:13:17.613625 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0116 03:13:17.613651 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:13:17.613689 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:13:17.613759 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:13:17.776287 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0116 03:13:17.787958 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0116 03:13:17.788016 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0116 03:13:17.788096 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0116 03:13:17.791623 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0116 03:13:17.791754 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0116 03:13:17.791815 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0116 03:13:17.791858 1011681 cache_images.go:92] LoadImages completed in 693.564709ms
	W0116 03:13:17.791955 1011681 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
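For the old-k8s-version profile the preload tarball does not carry the v1.16.0 images, so minikube runs `sudo crictl images --output json` on the guest, compares the result against the required image list, and, since the on-disk cache is empty as well, falls back to pulling. A small sketch of that presence check, assuming crictl is invoked locally (the real code runs it over SSH inside the VM):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors only the fields we need from `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// runtimeHasImage reports whether the container runtime already stores an
// image with the given tag, e.g. "registry.k8s.io/pause:3.1".
func runtimeHasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	for _, tag := range []string{"registry.k8s.io/pause:3.1", "registry.k8s.io/coredns:1.6.2"} {
		ok, err := runtimeHasImage(tag)
		fmt.Printf("%s present=%v err=%v\n", tag, ok, err)
	}
}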
	I0116 03:13:17.792040 1011681 ssh_runner.go:195] Run: crio config
	I0116 03:13:17.851037 1011681 cni.go:84] Creating CNI manager for ""
	I0116 03:13:17.851066 1011681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:13:17.851109 1011681 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:13:17.851136 1011681 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.91 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-788237 NodeName:old-k8s-version-788237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 03:13:17.851281 1011681 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-788237"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-788237
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.91:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:13:17.851355 1011681 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-788237 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-788237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:13:17.851419 1011681 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0116 03:13:17.861305 1011681 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:13:17.861416 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:13:17.871242 1011681 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0116 03:13:17.891002 1011681 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:13:17.908934 1011681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0116 03:13:17.928274 1011681 ssh_runner.go:195] Run: grep 192.168.39.91	control-plane.minikube.internal$ /etc/hosts
	I0116 03:13:17.932258 1011681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
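The one-liner above keeps the /etc/hosts entry idempotent: each run strips whatever control-plane.minikube.internal line already exists, appends the current control-plane IP, and copies the rebuilt file back into place, so repeated starts never accumulate duplicates. Expanded into a readable sketch (same logic as the logged command):

# Drop any stale control-plane.minikube.internal entry, append the current IP,
# then copy the rebuilt file back over /etc/hosts (only the final cp needs sudo).
{
  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
  echo $'192.168.39.91\tcontrol-plane.minikube.internal'
} > /tmp/h.$$
sudo cp /tmp/h.$$ /etc/hosts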
	I0116 03:13:17.947070 1011681 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237 for IP: 192.168.39.91
	I0116 03:13:17.947119 1011681 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:13:17.947316 1011681 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 03:13:17.947374 1011681 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 03:13:17.947476 1011681 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/client.key
	I0116 03:13:18.133447 1011681 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/apiserver.key.d2754551
	I0116 03:13:18.133566 1011681 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/proxy-client.key
	I0116 03:13:18.133765 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 03:13:18.133860 1011681 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 03:13:18.133884 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 03:13:18.133951 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 03:13:18.133988 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:13:18.134018 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 03:13:18.134075 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:13:18.135047 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:13:18.169653 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:13:18.203412 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:13:18.232247 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 03:13:18.264379 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:13:18.293926 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:13:18.320373 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:13:18.345098 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:13:18.375186 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 03:13:18.400408 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 03:13:18.426138 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:13:18.451943 1011681 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:13:18.470682 1011681 ssh_runner.go:195] Run: openssl version
	I0116 03:13:18.477291 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 03:13:18.487687 1011681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 03:13:18.492346 1011681 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 03:13:18.492438 1011681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 03:13:18.498376 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 03:13:18.509157 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 03:13:18.520433 1011681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 03:13:18.525633 1011681 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 03:13:18.525708 1011681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 03:13:18.531567 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:13:18.542827 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:13:18.553440 1011681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:18.558572 1011681 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:18.558647 1011681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:18.564459 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
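The three symlinks created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's hashed-directory convention: each CA in /etc/ssl/certs must also be reachable under <subject-hash>.0 for openssl verify and most TLS clients to find it, and the hash is exactly what `openssl x509 -hash` printed in the preceding steps. A generic sketch for any PEM file (the path is a placeholder):

# Link a CA certificate into the hashed layout OpenSSL expects.
pem=/usr/share/ca-certificates/minikubeCA.pem       # certificate to trust
hash=$(openssl x509 -hash -noout -in "$pem")        # e.g. b5213941
sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"       # .0 suffix disambiguates hash collisions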
	I0116 03:13:18.575413 1011681 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:13:18.580317 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:13:18.589623 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:13:18.598327 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:13:18.604540 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:13:18.610538 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:13:18.616482 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
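Each `openssl x509 -checkend 86400` call above asks one question: will this certificate still be valid 86400 seconds (24 hours) from now? The command exits 0 if so and non-zero otherwise, which is how these validity probes decide that the existing certificates can be reused. A standalone example against any certificate file:

# Exit status tells you whether the cert survives the next 24 hours.
if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
  echo "certificate valid for at least another 24h"
else
  echo "certificate missing or expiring within 24h"
fi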
	I0116 03:13:18.622438 1011681 kubeadm.go:404] StartCluster: {Name:old-k8s-version-788237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-788237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:13:18.622565 1011681 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:13:18.622638 1011681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:13:18.662697 1011681 cri.go:89] found id: ""
	I0116 03:13:18.662794 1011681 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:13:18.673299 1011681 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:13:18.673328 1011681 kubeadm.go:636] restartCluster start
	I0116 03:13:18.673404 1011681 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:13:18.683191 1011681 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:18.684893 1011681 kubeconfig.go:92] found "old-k8s-version-788237" server: "https://192.168.39.91:8443"
	I0116 03:13:18.688339 1011681 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:13:18.699684 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:18.699763 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:18.714966 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:19.200230 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:19.200346 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:19.216711 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:19.699865 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:19.699968 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:19.717864 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:20.200734 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:20.200839 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:16.049953 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:16.050440 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:16.050486 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:16.050405 1012757 retry.go:31] will retry after 1.677720431s: waiting for machine to come up
	I0116 03:13:17.729520 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:17.730062 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:17.730098 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:17.729997 1012757 retry.go:31] will retry after 1.686395601s: waiting for machine to come up
	I0116 03:13:19.419165 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:19.419699 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:19.419741 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:19.419628 1012757 retry.go:31] will retry after 2.679023059s: waiting for machine to come up
	I0116 03:13:18.844795 1011501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:13:18.861175 1011501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:13:18.964890 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:13:18.862657 1011501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:13:19.005912 1011501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:13:19.005941 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:13:19.047693 1011501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:13:19.047734 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:13:19.101576 1011501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:13:19.940514 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:19.940549 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:19.940914 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:19.940941 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:19.940954 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:19.940965 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:19.941288 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:19.941309 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:19.986987 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:19.987020 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:19.987375 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Closing plugin on server side
	I0116 03:13:19.989349 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:19.989375 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:20.550836 1011501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.449206565s)
	I0116 03:13:20.550903 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:20.550921 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:20.550961 1011501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.585981109s)
	I0116 03:13:20.551004 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:20.551020 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:20.551499 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Closing plugin on server side
	I0116 03:13:20.551509 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:20.551519 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:20.551564 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Closing plugin on server side
	I0116 03:13:20.551565 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:20.551604 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:20.551624 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:20.551610 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:20.551637 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:20.551654 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:20.551899 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:20.551918 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:20.551975 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Closing plugin on server side
	I0116 03:13:20.552009 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:20.552027 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:20.552050 1011501 addons.go:470] Verifying addon metrics-server=true in "embed-certs-480663"
	I0116 03:13:20.555953 1011501 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0116 03:13:20.557383 1011501 addons.go:505] enable addons completed in 2.42368035s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0116 03:13:20.756003 1011501 node_ready.go:58] node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:23.254943 1011501 node_ready.go:58] node "embed-certs-480663" has status "Ready":"False"
	W0116 03:13:20.218633 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:20.700343 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:20.700461 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:20.713613 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:21.200115 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:21.200232 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:21.214341 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:21.700520 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:21.700644 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:21.717190 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:22.200709 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:22.200870 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:22.217321 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:22.699859 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:22.699972 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:22.717201 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:23.200594 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:23.200713 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:23.217126 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:23.700769 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:23.700891 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:23.715639 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:24.200713 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:24.200800 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:24.216368 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:24.699816 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:24.699958 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:24.717041 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:25.200575 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:25.200673 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:22.100823 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:22.101280 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:22.101336 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:22.101245 1012757 retry.go:31] will retry after 3.352897115s: waiting for machine to come up
	I0116 03:13:25.456363 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:25.456824 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:25.456908 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:25.456819 1012757 retry.go:31] will retry after 4.541436356s: waiting for machine to come up
	I0116 03:13:24.754870 1011501 node_ready.go:49] node "embed-certs-480663" has status "Ready":"True"
	I0116 03:13:24.754900 1011501 node_ready.go:38] duration metric: took 6.00460635s waiting for node "embed-certs-480663" to be "Ready" ...
	I0116 03:13:24.754913 1011501 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:13:24.761593 1011501 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:24.769366 1011501 pod_ready.go:92] pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:24.769394 1011501 pod_ready.go:81] duration metric: took 7.773298ms waiting for pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:24.769407 1011501 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.782066 1011501 pod_ready.go:92] pod "etcd-embed-certs-480663" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:26.782105 1011501 pod_ready.go:81] duration metric: took 2.012689692s waiting for pod "etcd-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.782119 1011501 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.792641 1011501 pod_ready.go:92] pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:26.792674 1011501 pod_ready.go:81] duration metric: took 10.545313ms waiting for pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.792690 1011501 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.799734 1011501 pod_ready.go:92] pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:26.799756 1011501 pod_ready.go:81] duration metric: took 7.056918ms waiting for pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.799765 1011501 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j4786" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.804888 1011501 pod_ready.go:92] pod "kube-proxy-j4786" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:26.804924 1011501 pod_ready.go:81] duration metric: took 5.151602ms waiting for pod "kube-proxy-j4786" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.804937 1011501 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:27.954848 1011501 pod_ready.go:92] pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:27.954889 1011501 pod_ready.go:81] duration metric: took 1.149940262s waiting for pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:27.954904 1011501 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace to be "Ready" ...
	W0116 03:13:25.214882 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:25.700375 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:25.700473 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:25.713971 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:26.200077 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:26.200184 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:26.212440 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:26.699761 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:26.699855 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:26.713769 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:27.200383 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:27.200476 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:27.212354 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:27.699854 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:27.699946 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:27.712542 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:28.200037 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:28.200144 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:28.212556 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:28.700313 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:28.700415 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:28.712681 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:28.712718 1011681 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:13:28.712759 1011681 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:13:28.712773 1011681 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:13:28.712840 1011681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:13:28.764021 1011681 cri.go:89] found id: ""
	I0116 03:13:28.764122 1011681 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:13:28.780410 1011681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:13:28.790517 1011681 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:13:28.790617 1011681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:28.800491 1011681 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:28.800544 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:28.935606 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:29.805004 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:30.030241 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:30.123106 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
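Because existing configuration files were found, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full kubeadm init, leaving the existing data under /var/lib/minikube/etcd in place. The same sequence, written out as standalone commands with the paths taken from the log (a reference sketch, not something to replay against a healthy cluster):

# Re-issue certs and kubeconfigs, restart the kubelet, then regenerate the
# control-plane and etcd static-pod manifests from the same config file.
K8S_BIN=/var/lib/minikube/binaries/v1.16.0
CFG=/var/tmp/minikube/kubeadm.yaml
for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
  sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase $phase --config "$CFG"
done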
	I0116 03:13:30.003874 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.004370 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Found IP for machine: 192.168.72.158
	I0116 03:13:30.004394 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Reserving static IP address...
	I0116 03:13:30.004424 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has current primary IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.004824 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-775571", mac: "52:54:00:4b:bc:45", ip: "192.168.72.158"} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.004853 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | skip adding static IP to network mk-default-k8s-diff-port-775571 - found existing host DHCP lease matching {name: "default-k8s-diff-port-775571", mac: "52:54:00:4b:bc:45", ip: "192.168.72.158"}
	I0116 03:13:30.004868 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Reserved static IP address: 192.168.72.158
	I0116 03:13:30.004888 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for SSH to be available...
	I0116 03:13:30.004901 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Getting to WaitForSSH function...
	I0116 03:13:30.007176 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.007549 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.007592 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.007722 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Using SSH client type: external
	I0116 03:13:30.007752 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa (-rw-------)
	I0116 03:13:30.007791 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:13:30.007807 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | About to run SSH command:
	I0116 03:13:30.007822 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | exit 0
	I0116 03:13:30.105862 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | SSH cmd err, output: <nil>: 
	I0116 03:13:30.106241 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetConfigRaw
	I0116 03:13:30.107063 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetIP
	I0116 03:13:30.110265 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.110754 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.110788 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.111070 1011955 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/config.json ...
	I0116 03:13:30.111270 1011955 machine.go:88] provisioning docker machine ...
	I0116 03:13:30.111289 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:30.111511 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetMachineName
	I0116 03:13:30.111751 1011955 buildroot.go:166] provisioning hostname "default-k8s-diff-port-775571"
	I0116 03:13:30.111781 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetMachineName
	I0116 03:13:30.111987 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:30.114629 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.115002 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.115032 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.115205 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:30.115375 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.115551 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.115706 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:30.115886 1011955 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:30.116340 1011955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0116 03:13:30.116363 1011955 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-775571 && echo "default-k8s-diff-port-775571" | sudo tee /etc/hostname
	I0116 03:13:30.260423 1011955 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-775571
	
	I0116 03:13:30.260451 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:30.263641 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.264075 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.264117 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.264539 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:30.264776 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.264987 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.265162 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:30.265379 1011955 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:30.265894 1011955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0116 03:13:30.265929 1011955 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-775571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-775571/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-775571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:13:30.404028 1011955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:13:30.404070 1011955 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 03:13:30.404131 1011955 buildroot.go:174] setting up certificates
	I0116 03:13:30.404147 1011955 provision.go:83] configureAuth start
	I0116 03:13:30.404167 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetMachineName
	I0116 03:13:30.404539 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetIP
	I0116 03:13:30.407588 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.408002 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.408036 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.408229 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:30.410911 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.411309 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.411362 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.411463 1011955 provision.go:138] copyHostCerts
	I0116 03:13:30.411550 1011955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 03:13:30.411564 1011955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 03:13:30.411637 1011955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 03:13:30.411760 1011955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 03:13:30.411768 1011955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 03:13:30.411800 1011955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 03:13:30.411878 1011955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 03:13:30.411891 1011955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 03:13:30.411920 1011955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 03:13:30.411983 1011955 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-775571 san=[192.168.72.158 192.168.72.158 localhost 127.0.0.1 minikube default-k8s-diff-port-775571]
	I0116 03:13:30.478444 1011955 provision.go:172] copyRemoteCerts
	I0116 03:13:30.478520 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:13:30.478551 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:30.481824 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.482200 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.482239 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.482469 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:30.482663 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.482870 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:30.483070 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:13:31.280327 1011460 start.go:369] acquired machines lock for "no-preload-934668" in 56.48409901s
	I0116 03:13:31.280456 1011460 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:13:31.280473 1011460 fix.go:54] fixHost starting: 
	I0116 03:13:31.280948 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:31.280986 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:31.302076 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39289
	I0116 03:13:31.302631 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:31.303270 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:13:31.303299 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:31.303700 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:31.304127 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:31.304681 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetState
	I0116 03:13:31.307845 1011460 fix.go:102] recreateIfNeeded on no-preload-934668: state=Stopped err=<nil>
	I0116 03:13:31.307882 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	W0116 03:13:31.308092 1011460 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:13:31.310208 1011460 out.go:177] * Restarting existing kvm2 VM for "no-preload-934668" ...
	I0116 03:13:31.311591 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Start
	I0116 03:13:31.311829 1011460 main.go:141] libmachine: (no-preload-934668) Ensuring networks are active...
	I0116 03:13:31.312840 1011460 main.go:141] libmachine: (no-preload-934668) Ensuring network default is active
	I0116 03:13:31.313302 1011460 main.go:141] libmachine: (no-preload-934668) Ensuring network mk-no-preload-934668 is active
	I0116 03:13:31.313756 1011460 main.go:141] libmachine: (no-preload-934668) Getting domain xml...
	I0116 03:13:31.314627 1011460 main.go:141] libmachine: (no-preload-934668) Creating domain...
	I0116 03:13:30.580435 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:13:30.604188 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0116 03:13:30.627877 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:13:30.651737 1011955 provision.go:86] duration metric: configureAuth took 247.572907ms
	I0116 03:13:30.651768 1011955 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:13:30.651949 1011955 config.go:182] Loaded profile config "default-k8s-diff-port-775571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:13:30.652040 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:30.654855 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.655180 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.655224 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.655395 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:30.655676 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.655874 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.656047 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:30.656231 1011955 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:30.656542 1011955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0116 03:13:30.656562 1011955 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:13:30.996593 1011955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
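The provisioning step above writes a one-line /etc/sysconfig/crio.minikube containing CRIO_MINIKUBE_OPTIONS and restarts CRI-O so that --insecure-registry 10.96.0.0/12 covers the service CIDR. A quick sanity check on the node (illustrative commands, assuming shell access to the VM):

# The override file should contain exactly the CRIO_MINIKUBE_OPTIONS line shown above.
cat /etc/sysconfig/crio.minikube
# crio must have come back up after the restart issued by the provisioner.
systemctl is-active crio
journalctl -u crio --no-pager | tail -n 20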
	I0116 03:13:30.996632 1011955 machine.go:91] provisioned docker machine in 885.348285ms
	I0116 03:13:30.996650 1011955 start.go:300] post-start starting for "default-k8s-diff-port-775571" (driver="kvm2")
	I0116 03:13:30.996669 1011955 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:13:30.996697 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:30.997187 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:13:30.997222 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:31.000071 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.000460 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:31.000498 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.000666 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:31.000867 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:31.001030 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:31.001215 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:13:31.102897 1011955 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:13:31.107910 1011955 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:13:31.107939 1011955 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 03:13:31.108003 1011955 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 03:13:31.108076 1011955 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 03:13:31.108165 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:13:31.118591 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:13:31.144536 1011955 start.go:303] post-start completed in 147.864906ms
	I0116 03:13:31.144581 1011955 fix.go:56] fixHost completed within 21.109302207s
	I0116 03:13:31.144609 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:31.147887 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.148261 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:31.148300 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.148487 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:31.148765 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:31.148980 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:31.149195 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:31.149426 1011955 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:31.149818 1011955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0116 03:13:31.149838 1011955 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:13:31.280175 1011955 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374811.251760286
	
	I0116 03:13:31.280203 1011955 fix.go:206] guest clock: 1705374811.251760286
	I0116 03:13:31.280210 1011955 fix.go:219] Guest: 2024-01-16 03:13:31.251760286 +0000 UTC Remote: 2024-01-16 03:13:31.144586974 +0000 UTC m=+275.673207404 (delta=107.173312ms)
	I0116 03:13:31.280231 1011955 fix.go:190] guest clock delta is within tolerance: 107.173312ms
	I0116 03:13:31.280242 1011955 start.go:83] releasing machines lock for "default-k8s-diff-port-775571", held for 21.244993059s
	I0116 03:13:31.280274 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:31.280606 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetIP
	I0116 03:13:31.284082 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.284580 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:31.284627 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.284960 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:31.285552 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:31.285784 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:31.285894 1011955 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:13:31.285954 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:31.286062 1011955 ssh_runner.go:195] Run: cat /version.json
	I0116 03:13:31.286081 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:31.289112 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.289486 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.289541 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:31.289565 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.289700 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:31.289942 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:31.289959 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:31.289969 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.290169 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:31.290251 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:31.290334 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:13:31.290487 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:31.290643 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:31.290787 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:13:31.412666 1011955 ssh_runner.go:195] Run: systemctl --version
	I0116 03:13:31.420934 1011955 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:13:31.571465 1011955 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:13:31.580180 1011955 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:13:31.580312 1011955 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:13:31.601148 1011955 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:13:31.601187 1011955 start.go:475] detecting cgroup driver to use...
	I0116 03:13:31.601274 1011955 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:13:31.622197 1011955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:13:31.637047 1011955 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:13:31.637146 1011955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:13:31.655781 1011955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:13:31.678925 1011955 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:13:31.827298 1011955 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:13:31.973784 1011955 docker.go:233] disabling docker service ...
	I0116 03:13:31.973890 1011955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:13:32.003399 1011955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:13:32.022537 1011955 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:13:32.201640 1011955 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:13:32.336251 1011955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:13:32.352402 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:13:32.376724 1011955 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:13:32.376796 1011955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:32.387636 1011955 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:13:32.387721 1011955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:32.399288 1011955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:32.411777 1011955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:32.425137 1011955 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:13:32.438308 1011955 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:13:32.451165 1011955 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:13:32.451246 1011955 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:13:32.467922 1011955 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:13:32.479144 1011955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:13:32.651975 1011955 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:13:32.857869 1011955 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:13:32.857953 1011955 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:13:32.863869 1011955 start.go:543] Will wait 60s for crictl version
	I0116 03:13:32.863957 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:13:32.868179 1011955 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:13:32.917020 1011955 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:13:32.917111 1011955 ssh_runner.go:195] Run: crio --version
	I0116 03:13:32.970563 1011955 ssh_runner.go:195] Run: crio --version
	I0116 03:13:33.027800 1011955 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 03:13:29.966940 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:32.466746 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:30.212501 1011681 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:13:30.212577 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:30.712756 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:31.212694 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:31.713596 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:32.212767 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:32.258055 1011681 api_server.go:72] duration metric: took 2.045552104s to wait for apiserver process to appear ...
	I0116 03:13:32.258091 1011681 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:13:32.258118 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:32.258807 1011681 api_server.go:269] stopped: https://192.168.39.91:8443/healthz: Get "https://192.168.39.91:8443/healthz": dial tcp 192.168.39.91:8443: connect: connection refused
	I0116 03:13:32.758305 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:33.029157 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetIP
	I0116 03:13:33.032430 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:33.032824 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:33.032860 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:33.033077 1011955 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0116 03:13:33.037500 1011955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:13:33.050478 1011955 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:13:33.050573 1011955 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:13:33.096041 1011955 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 03:13:33.096133 1011955 ssh_runner.go:195] Run: which lz4
	I0116 03:13:33.100546 1011955 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:13:33.105198 1011955 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:13:33.105234 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 03:13:35.104728 1011955 crio.go:444] Took 2.004229 seconds to copy over tarball
	I0116 03:13:35.104817 1011955 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:13:32.655911 1011460 main.go:141] libmachine: (no-preload-934668) Waiting to get IP...
	I0116 03:13:32.657029 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:32.657609 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:32.657728 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:32.657598 1012976 retry.go:31] will retry after 271.069608ms: waiting for machine to come up
	I0116 03:13:32.930214 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:32.930725 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:32.930856 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:32.930775 1012976 retry.go:31] will retry after 377.793601ms: waiting for machine to come up
	I0116 03:13:33.310351 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:33.310835 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:33.310897 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:33.310781 1012976 retry.go:31] will retry after 416.26092ms: waiting for machine to come up
	I0116 03:13:33.728484 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:33.729148 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:33.729189 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:33.729011 1012976 retry.go:31] will retry after 608.181162ms: waiting for machine to come up
	I0116 03:13:34.339151 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:34.339614 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:34.339642 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:34.339539 1012976 retry.go:31] will retry after 750.260968ms: waiting for machine to come up
	I0116 03:13:35.090870 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:35.091333 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:35.091362 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:35.091285 1012976 retry.go:31] will retry after 700.212947ms: waiting for machine to come up
	I0116 03:13:35.793243 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:35.793740 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:35.793774 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:35.793633 1012976 retry.go:31] will retry after 743.854004ms: waiting for machine to come up
	I0116 03:13:36.539322 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:36.539985 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:36.540018 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:36.539939 1012976 retry.go:31] will retry after 1.305141922s: waiting for machine to come up
	I0116 03:13:34.974062 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:37.464767 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:37.759482 1011681 api_server.go:269] stopped: https://192.168.39.91:8443/healthz: Get "https://192.168.39.91:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0116 03:13:37.759559 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:39.188258 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:13:39.188300 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:13:39.188322 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:39.222005 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:13:39.222064 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:13:39.259251 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:39.360385 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:13:39.360456 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:13:39.759006 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:38.432521 1011955 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.327659635s)
	I0116 03:13:38.432570 1011955 crio.go:451] Took 3.327807 seconds to extract the tarball
	I0116 03:13:38.432585 1011955 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:13:38.477872 1011955 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:13:38.535414 1011955 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:13:38.535442 1011955 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:13:38.535510 1011955 ssh_runner.go:195] Run: crio config
	I0116 03:13:38.604605 1011955 cni.go:84] Creating CNI manager for ""
	I0116 03:13:38.604636 1011955 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:13:38.604663 1011955 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:13:38.604690 1011955 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.158 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-775571 NodeName:default-k8s-diff-port-775571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:13:38.604871 1011955 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.158
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-775571"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:13:38.604946 1011955 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-775571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-775571 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0116 03:13:38.605006 1011955 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:13:38.619020 1011955 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:13:38.619106 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:13:38.633715 1011955 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0116 03:13:38.651239 1011955 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:13:38.670877 1011955 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0116 03:13:38.689268 1011955 ssh_runner.go:195] Run: grep 192.168.72.158	control-plane.minikube.internal$ /etc/hosts
	I0116 03:13:38.694783 1011955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:13:38.709936 1011955 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571 for IP: 192.168.72.158
	I0116 03:13:38.709984 1011955 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:13:38.710196 1011955 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 03:13:38.710269 1011955 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 03:13:38.710379 1011955 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/client.key
	I0116 03:13:38.710471 1011955 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/apiserver.key.6c936bf0
	I0116 03:13:38.710533 1011955 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/proxy-client.key
	I0116 03:13:38.710677 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 03:13:38.710717 1011955 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 03:13:38.710734 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 03:13:38.710771 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 03:13:38.710810 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:13:38.710849 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 03:13:38.710911 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:13:38.711657 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:13:38.742564 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 03:13:38.770741 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:13:38.795401 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 03:13:38.819574 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:13:38.847962 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:13:38.872537 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:13:38.898930 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:13:38.924558 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:13:38.950417 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 03:13:38.976115 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 03:13:39.008493 1011955 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:13:39.028392 1011955 ssh_runner.go:195] Run: openssl version
	I0116 03:13:39.034429 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 03:13:39.046541 1011955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 03:13:39.051560 1011955 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 03:13:39.051656 1011955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 03:13:39.058169 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 03:13:39.072168 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 03:13:39.086485 1011955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 03:13:39.091108 1011955 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 03:13:39.091162 1011955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 03:13:39.098393 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:13:39.109323 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:13:39.121606 1011955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:39.127187 1011955 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:39.127263 1011955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:39.134830 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:13:39.149731 1011955 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:13:39.156181 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:13:39.164095 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:13:39.172662 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:13:39.180598 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:13:39.188640 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:13:39.197249 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 03:13:39.206289 1011955 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-775571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-775571 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:13:39.206442 1011955 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:13:39.206509 1011955 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:13:39.259399 1011955 cri.go:89] found id: ""
	I0116 03:13:39.259481 1011955 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:13:39.273356 1011955 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:13:39.273385 1011955 kubeadm.go:636] restartCluster start
	I0116 03:13:39.273474 1011955 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:13:39.287459 1011955 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:39.288748 1011955 kubeconfig.go:92] found "default-k8s-diff-port-775571" server: "https://192.168.72.158:8444"
	I0116 03:13:39.291777 1011955 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:13:39.304936 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:39.305013 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:39.321035 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:39.805691 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:39.805843 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:39.821119 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:40.305352 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:40.305464 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:40.320908 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:40.205526 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:13:40.417347 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:13:40.417381 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:40.626819 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:13:40.626875 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:13:40.759016 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:40.769794 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:13:40.769867 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:13:41.258280 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:41.268104 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 200:
	ok
	I0116 03:13:41.276527 1011681 api_server.go:141] control plane version: v1.16.0
	I0116 03:13:41.276576 1011681 api_server.go:131] duration metric: took 9.018477008s to wait for apiserver health ...
	I0116 03:13:41.276587 1011681 cni.go:84] Creating CNI manager for ""
	I0116 03:13:41.276593 1011681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:13:41.278640 1011681 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:13:37.847223 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:37.847666 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:37.847702 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:37.847614 1012976 retry.go:31] will retry after 1.639650566s: waiting for machine to come up
	I0116 03:13:39.488850 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:39.489197 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:39.489230 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:39.489145 1012976 retry.go:31] will retry after 2.106627157s: waiting for machine to come up
	I0116 03:13:41.598019 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:41.598601 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:41.598635 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:41.598540 1012976 retry.go:31] will retry after 2.493521899s: waiting for machine to come up
	I0116 03:13:39.963772 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:41.965748 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:41.280699 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:13:41.300296 1011681 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:13:41.341944 1011681 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:13:41.361578 1011681 system_pods.go:59] 7 kube-system pods found
	I0116 03:13:41.361618 1011681 system_pods.go:61] "coredns-5644d7b6d9-5j7ps" [d1ccd80c-b19b-49ae-bc1c-deee7f0db229] Running
	I0116 03:13:41.361627 1011681 system_pods.go:61] "etcd-old-k8s-version-788237" [4a34c524-dce0-4c01-a1f2-291a59c02044] Running
	I0116 03:13:41.361634 1011681 system_pods.go:61] "kube-apiserver-old-k8s-version-788237" [2b802f72-d63e-423d-ac43-89b836bd4b70] Running
	I0116 03:13:41.361640 1011681 system_pods.go:61] "kube-controller-manager-old-k8s-version-788237" [a41d42f1-0587-4cb6-965f-fffdb8bcde5d] Running
	I0116 03:13:41.361645 1011681 system_pods.go:61] "kube-proxy-vtxjk" [4993e4ef-5193-4632-a61a-a0b38601239d] Running
	I0116 03:13:41.361651 1011681 system_pods.go:61] "kube-scheduler-old-k8s-version-788237" [712a30dc-0217-47d4-88ba-d63f6f2f6d02] Running
	I0116 03:13:41.361662 1011681 system_pods.go:61] "storage-provisioner" [2e43ef59-3c6b-4c78-81ae-71dbd0eaddfd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:13:41.361680 1011681 system_pods.go:74] duration metric: took 19.701772ms to wait for pod list to return data ...
	I0116 03:13:41.361698 1011681 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:13:41.366876 1011681 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:13:41.366918 1011681 node_conditions.go:123] node cpu capacity is 2
	I0116 03:13:41.366933 1011681 node_conditions.go:105] duration metric: took 5.228319ms to run NodePressure ...
	I0116 03:13:41.366961 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:41.921064 1011681 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:13:41.925272 1011681 retry.go:31] will retry after 140.477343ms: kubelet not initialised
	I0116 03:13:42.072065 1011681 retry.go:31] will retry after 346.605533ms: kubelet not initialised
	I0116 03:13:42.428950 1011681 retry.go:31] will retry after 456.811796ms: kubelet not initialised
	I0116 03:13:42.893528 1011681 retry.go:31] will retry after 821.458486ms: kubelet not initialised
	I0116 03:13:43.721228 1011681 retry.go:31] will retry after 1.260888799s: kubelet not initialised
	I0116 03:13:44.988346 1011681 retry.go:31] will retry after 1.183564266s: kubelet not initialised
	I0116 03:13:40.805756 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:40.805890 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:40.823823 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:41.305065 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:41.305161 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:41.317967 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:41.805703 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:41.805813 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:41.819698 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:42.305067 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:42.305209 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:42.318643 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:42.805284 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:42.805381 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:42.821975 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:43.305106 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:43.305234 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:43.318457 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:43.805741 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:43.805902 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:43.820562 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:44.305077 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:44.305217 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:44.322452 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:44.805978 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:44.806111 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:44.822302 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:45.305330 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:45.305432 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:45.317788 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:44.095061 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:44.095629 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:44.095658 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:44.095576 1012976 retry.go:31] will retry after 3.106364447s: waiting for machine to come up
	I0116 03:13:47.203798 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:47.204278 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:47.204310 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:47.204216 1012976 retry.go:31] will retry after 3.186263998s: waiting for machine to come up
	I0116 03:13:44.462154 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:46.467556 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:46.177475 1011681 retry.go:31] will retry after 2.879508446s: kubelet not initialised
	I0116 03:13:49.062319 1011681 retry.go:31] will retry after 3.01676683s: kubelet not initialised
	I0116 03:13:45.805770 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:45.805896 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:45.822222 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:46.305853 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:46.305977 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:46.322927 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:46.805392 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:46.805501 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:46.822012 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:47.305518 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:47.305634 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:47.322371 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:47.805932 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:47.806027 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:47.821119 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:48.305696 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:48.305832 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:48.318366 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:48.805946 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:48.806039 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:48.819066 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:49.305780 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:49.305922 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:49.318542 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:49.318576 1011955 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:13:49.318588 1011955 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:13:49.318602 1011955 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:13:49.318663 1011955 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:13:49.361552 1011955 cri.go:89] found id: ""
	I0116 03:13:49.361636 1011955 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:13:49.378478 1011955 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:13:49.389158 1011955 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:13:49.389248 1011955 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:49.398973 1011955 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:49.399019 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:49.516974 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:50.394812 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.395295 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has current primary IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.395323 1011460 main.go:141] libmachine: (no-preload-934668) Found IP for machine: 192.168.50.29
	I0116 03:13:50.395338 1011460 main.go:141] libmachine: (no-preload-934668) Reserving static IP address...
	I0116 03:13:50.395804 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "no-preload-934668", mac: "52:54:00:96:89:86", ip: "192.168.50.29"} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.395830 1011460 main.go:141] libmachine: (no-preload-934668) Reserved static IP address: 192.168.50.29
	I0116 03:13:50.395851 1011460 main.go:141] libmachine: (no-preload-934668) DBG | skip adding static IP to network mk-no-preload-934668 - found existing host DHCP lease matching {name: "no-preload-934668", mac: "52:54:00:96:89:86", ip: "192.168.50.29"}
	I0116 03:13:50.395880 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Getting to WaitForSSH function...
	I0116 03:13:50.395898 1011460 main.go:141] libmachine: (no-preload-934668) Waiting for SSH to be available...
	I0116 03:13:50.398256 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.398608 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.398652 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.398838 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Using SSH client type: external
	I0116 03:13:50.398864 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa (-rw-------)
	I0116 03:13:50.398917 1011460 main.go:141] libmachine: (no-preload-934668) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:13:50.398936 1011460 main.go:141] libmachine: (no-preload-934668) DBG | About to run SSH command:
	I0116 03:13:50.398949 1011460 main.go:141] libmachine: (no-preload-934668) DBG | exit 0
	I0116 03:13:50.489493 1011460 main.go:141] libmachine: (no-preload-934668) DBG | SSH cmd err, output: <nil>: 
	I0116 03:13:50.489954 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetConfigRaw
	I0116 03:13:50.490626 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetIP
	I0116 03:13:50.493468 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.493892 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.493943 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.494329 1011460 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/config.json ...
	I0116 03:13:50.494545 1011460 machine.go:88] provisioning docker machine ...
	I0116 03:13:50.494566 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:50.494837 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetMachineName
	I0116 03:13:50.495038 1011460 buildroot.go:166] provisioning hostname "no-preload-934668"
	I0116 03:13:50.495067 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetMachineName
	I0116 03:13:50.495216 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:50.497623 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.498048 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.498068 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.498226 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:50.498413 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:50.498569 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:50.498711 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:50.498887 1011460 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:50.499381 1011460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0116 03:13:50.499400 1011460 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-934668 && echo "no-preload-934668" | sudo tee /etc/hostname
	I0116 03:13:50.632759 1011460 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-934668
	
	I0116 03:13:50.632795 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:50.636057 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.636489 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.636523 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.636684 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:50.636965 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:50.637189 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:50.637383 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:50.637560 1011460 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:50.637994 1011460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0116 03:13:50.638021 1011460 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-934668' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-934668/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-934668' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:13:50.765312 1011460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:13:50.765351 1011460 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 03:13:50.765380 1011460 buildroot.go:174] setting up certificates
	I0116 03:13:50.765395 1011460 provision.go:83] configureAuth start
	I0116 03:13:50.765408 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetMachineName
	I0116 03:13:50.765746 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetIP
	I0116 03:13:50.769190 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.769597 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.769670 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.769902 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:50.772879 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.773334 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.773367 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.773660 1011460 provision.go:138] copyHostCerts
	I0116 03:13:50.773750 1011460 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 03:13:50.773766 1011460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 03:13:50.773868 1011460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 03:13:50.774025 1011460 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 03:13:50.774043 1011460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 03:13:50.774077 1011460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 03:13:50.774174 1011460 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 03:13:50.774187 1011460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 03:13:50.774221 1011460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 03:13:50.774317 1011460 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.no-preload-934668 san=[192.168.50.29 192.168.50.29 localhost 127.0.0.1 minikube no-preload-934668]
	I0116 03:13:50.955273 1011460 provision.go:172] copyRemoteCerts
	I0116 03:13:50.955364 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:13:50.955404 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:50.958601 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.958977 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.959013 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.959258 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:50.959495 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:50.959704 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:50.959878 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:13:51.047852 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:13:51.079250 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 03:13:51.110170 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:13:51.137342 1011460 provision.go:86] duration metric: configureAuth took 371.929858ms
	I0116 03:13:51.137376 1011460 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:13:51.137602 1011460 config.go:182] Loaded profile config "no-preload-934668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:13:51.137690 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:51.140451 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.140935 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.140963 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.141217 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:51.141435 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.141604 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.141726 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:51.141913 1011460 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:51.142238 1011460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0116 03:13:51.142267 1011460 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:13:51.468734 1011460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:13:51.468771 1011460 machine.go:91] provisioned docker machine in 974.21023ms
	I0116 03:13:51.468786 1011460 start.go:300] post-start starting for "no-preload-934668" (driver="kvm2")
	I0116 03:13:51.468803 1011460 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:13:51.468828 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:51.469200 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:13:51.469228 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:51.472154 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.472614 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.472665 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.472794 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:51.472991 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.473167 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:51.473321 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:13:51.558257 1011460 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:13:51.563146 1011460 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:13:51.563178 1011460 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 03:13:51.563243 1011460 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 03:13:51.563339 1011460 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 03:13:51.563437 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:13:51.574145 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:13:51.603071 1011460 start.go:303] post-start completed in 134.264931ms
	I0116 03:13:51.603104 1011460 fix.go:56] fixHost completed within 20.322632188s
	I0116 03:13:51.603128 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:51.606596 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.607040 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.607094 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.607312 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:51.607554 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.607710 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.607896 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:51.608107 1011460 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:51.608461 1011460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0116 03:13:51.608472 1011460 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:13:51.724098 1011460 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374831.664998093
	
	I0116 03:13:51.724128 1011460 fix.go:206] guest clock: 1705374831.664998093
	I0116 03:13:51.724137 1011460 fix.go:219] Guest: 2024-01-16 03:13:51.664998093 +0000 UTC Remote: 2024-01-16 03:13:51.60310878 +0000 UTC m=+359.363375393 (delta=61.889313ms)
	I0116 03:13:51.724164 1011460 fix.go:190] guest clock delta is within tolerance: 61.889313ms
	I0116 03:13:51.724171 1011460 start.go:83] releasing machines lock for "no-preload-934668", held for 20.443784472s
	I0116 03:13:51.724202 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:51.724534 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetIP
	I0116 03:13:51.727999 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.728527 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.728562 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.728809 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:51.729469 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:51.729704 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:51.729819 1011460 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:13:51.729869 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:51.729958 1011460 ssh_runner.go:195] Run: cat /version.json
	I0116 03:13:51.729976 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:51.732965 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.733095 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.733424 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.733451 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.733528 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.733550 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.733591 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:51.733725 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:51.733841 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.733972 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.733998 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:51.734170 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:51.734205 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:13:51.734306 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:13:51.819882 1011460 ssh_runner.go:195] Run: systemctl --version
	I0116 03:13:51.848935 1011460 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:13:52.005460 1011460 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:13:52.012691 1011460 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:13:52.012799 1011460 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:13:52.031857 1011460 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:13:52.031884 1011460 start.go:475] detecting cgroup driver to use...
	I0116 03:13:52.031950 1011460 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:13:52.049305 1011460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:13:52.063332 1011460 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:13:52.063407 1011460 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:13:52.080341 1011460 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:13:52.099750 1011460 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:13:52.241916 1011460 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:13:52.374908 1011460 docker.go:233] disabling docker service ...
	I0116 03:13:52.375010 1011460 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:13:52.393531 1011460 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:13:52.410744 1011460 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:13:52.545990 1011460 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:13:52.677872 1011460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:13:52.692652 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:13:52.711774 1011460 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:13:52.711871 1011460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:52.722079 1011460 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:13:52.722179 1011460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:52.732784 1011460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:52.742863 1011460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:52.752987 1011460 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:13:52.764401 1011460 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:13:52.773584 1011460 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:13:52.773668 1011460 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:13:52.787400 1011460 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:13:52.798262 1011460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:13:52.928159 1011460 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:13:53.106967 1011460 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:13:53.107069 1011460 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:13:53.112312 1011460 start.go:543] Will wait 60s for crictl version
	I0116 03:13:53.112387 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.116701 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:13:53.166149 1011460 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:13:53.166246 1011460 ssh_runner.go:195] Run: crio --version
	I0116 03:13:53.227306 1011460 ssh_runner.go:195] Run: crio --version
	I0116 03:13:53.289601 1011460 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0116 03:13:48.961681 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:50.969620 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:53.462450 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:52.085958 1011681 retry.go:31] will retry after 4.051731251s: kubelet not initialised
	I0116 03:13:50.527883 1011955 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.010858065s)
	I0116 03:13:50.527951 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:50.734058 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:50.824872 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:50.919552 1011955 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:13:50.919679 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:51.420316 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:51.920460 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:52.419846 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:52.920241 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:53.419933 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:53.920527 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:53.948958 1011955 api_server.go:72] duration metric: took 3.029405367s to wait for apiserver process to appear ...
	I0116 03:13:53.948990 1011955 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:13:53.949018 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:13:53.291126 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetIP
	I0116 03:13:53.294326 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:53.294780 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:53.294833 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:53.295093 1011460 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0116 03:13:53.300971 1011460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:13:53.316040 1011460 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 03:13:53.316107 1011460 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:13:53.368111 1011460 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0116 03:13:53.368138 1011460 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:13:53.368196 1011460 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:53.368485 1011460 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:13:53.368569 1011460 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:13:53.368584 1011460 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:13:53.368596 1011460 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:13:53.368607 1011460 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:13:53.368626 1011460 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0116 03:13:53.368669 1011460 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:13:53.370675 1011460 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:13:53.370735 1011460 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:13:53.371123 1011460 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:13:53.371132 1011460 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:13:53.371191 1011460 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0116 03:13:53.371333 1011460 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:13:53.371456 1011460 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:53.371815 1011460 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:13:53.515854 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0116 03:13:53.524922 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:13:53.531697 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:13:53.540206 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0116 03:13:53.543219 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:13:53.546913 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:13:53.580609 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:13:53.610214 1011460 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0116 03:13:53.610281 1011460 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:13:53.610353 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.677663 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:53.687535 1011460 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0116 03:13:53.687595 1011460 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:13:53.687599 1011460 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0116 03:13:53.687638 1011460 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:13:53.687667 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.687717 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.862729 1011460 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0116 03:13:53.862804 1011460 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:13:53.862830 1011460 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0116 03:13:53.862929 1011460 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:13:53.863101 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.863151 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:13:53.862947 1011460 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0116 03:13:53.863216 1011460 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:13:53.863098 1011460 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0116 03:13:53.863245 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.863264 1011460 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:53.862873 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.863311 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.863060 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0116 03:13:53.863156 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:13:53.928805 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:13:53.968913 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0116 03:13:53.969132 1011460 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:13:53.974631 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0116 03:13:53.974701 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0116 03:13:53.974754 1011460 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:13:53.974928 1011460 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:13:53.974792 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:13:53.974818 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:13:53.974833 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:54.018085 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0116 03:13:54.018198 1011460 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:13:54.018288 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0116 03:13:54.018300 1011460 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:13:54.018326 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:13:54.086983 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 03:13:54.087041 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0116 03:13:54.087074 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0116 03:13:54.087111 1011460 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:13:54.087147 1011460 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:13:54.087148 1011460 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:13:54.087203 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0116 03:13:54.087245 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0116 03:13:55.466435 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:57.968591 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:57.859025 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:13:57.859081 1011955 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:13:57.859100 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:13:57.949519 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:57.949575 1011955 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:57.949623 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:13:57.965508 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:57.965553 1011955 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:58.449680 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:13:58.456250 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:58.456292 1011955 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:58.950052 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:13:58.962965 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:58.963019 1011955 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:59.449560 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:13:59.457086 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0116 03:13:59.469254 1011955 api_server.go:141] control plane version: v1.28.4
	I0116 03:13:59.469294 1011955 api_server.go:131] duration metric: took 5.520295477s to wait for apiserver health ...
	I0116 03:13:59.469308 1011955 cni.go:84] Creating CNI manager for ""
	I0116 03:13:59.469316 1011955 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:13:59.471524 1011955 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:13:56.143871 1011681 retry.go:31] will retry after 12.777471538s: kubelet not initialised
	I0116 03:13:59.472896 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:13:59.486944 1011955 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:13:59.511553 1011955 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:13:59.530287 1011955 system_pods.go:59] 8 kube-system pods found
	I0116 03:13:59.530357 1011955 system_pods.go:61] "coredns-5dd5756b68-z7b9d" [735c028e-f6a8-4a96-a615-95befe445a97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:13:59.530374 1011955 system_pods.go:61] "etcd-default-k8s-diff-port-775571" [3e321076-74dd-49a8-b078-4f63505b5783] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:13:59.530391 1011955 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-775571" [07f01ea4-0317-4d3d-a03c-7c1756a5746c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:13:59.530409 1011955 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-775571" [5d4f4ee1-1f7c-4dfc-8c85-daca7a2d9fc0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:13:59.530428 1011955 system_pods.go:61] "kube-proxy-lntj2" [946acb12-217d-42e6-bcfc-37dca684b638] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:13:59.530437 1011955 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-775571" [6b278ad1-d59e-4b81-a4ec-cde1b643bb90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:13:59.530449 1011955 system_pods.go:61] "metrics-server-57f55c9bc5-9bsqm" [ef0830b9-7e34-4aab-a1a6-8f91881b6934] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:13:59.530460 1011955 system_pods.go:61] "storage-provisioner" [8b20335e-7293-48bd-99f6-987cd95a0dc2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:13:59.530474 1011955 system_pods.go:74] duration metric: took 18.829356ms to wait for pod list to return data ...
	I0116 03:13:59.530483 1011955 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:13:59.535596 1011955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:13:59.535637 1011955 node_conditions.go:123] node cpu capacity is 2
	I0116 03:13:59.535651 1011955 node_conditions.go:105] duration metric: took 5.161567ms to run NodePressure ...
	I0116 03:13:59.535675 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:00.026516 1011955 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:14:00.035093 1011955 kubeadm.go:787] kubelet initialised
	I0116 03:14:00.035126 1011955 kubeadm.go:788] duration metric: took 8.522284ms waiting for restarted kubelet to initialise ...
	I0116 03:14:00.035137 1011955 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:14:00.067410 1011955 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-z7b9d" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:58.094229 1011460 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (4.076000974s)
	I0116 03:13:58.094289 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.075931984s)
	I0116 03:13:58.094310 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0116 03:13:58.094313 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0116 03:13:58.094331 1011460 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (4.007198419s)
	I0116 03:13:58.094353 1011460 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:13:58.094364 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0116 03:13:58.094367 1011460 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (4.007202527s)
	I0116 03:13:58.094384 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0116 03:13:58.094406 1011460 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (4.007194547s)
	I0116 03:13:58.094462 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0116 03:13:58.094412 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:14:01.772635 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.678136161s)
	I0116 03:14:01.772673 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0116 03:14:01.772705 1011460 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:14:01.772758 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:14:00.463370 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:02.471583 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:02.075650 1011955 pod_ready.go:102] pod "coredns-5dd5756b68-z7b9d" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:04.077051 1011955 pod_ready.go:102] pod "coredns-5dd5756b68-z7b9d" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:04.575569 1011955 pod_ready.go:92] pod "coredns-5dd5756b68-z7b9d" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:04.575601 1011955 pod_ready.go:81] duration metric: took 4.508014187s waiting for pod "coredns-5dd5756b68-z7b9d" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:04.575613 1011955 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:03.238654 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.465862156s)
	I0116 03:14:03.238716 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0116 03:14:03.238745 1011460 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:14:03.238799 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:14:05.517213 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.278362381s)
	I0116 03:14:05.517256 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0116 03:14:05.517290 1011460 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:14:05.517354 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:14:06.265419 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0116 03:14:06.265468 1011460 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:14:06.265522 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:14:04.544905 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:06.964607 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:08.928050 1011681 retry.go:31] will retry after 7.799067246s: kubelet not initialised
	I0116 03:14:06.583214 1011955 pod_ready.go:102] pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:08.584517 1011955 pod_ready.go:102] pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:08.427431 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.161882333s)
	I0116 03:14:08.427460 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0116 03:14:08.427485 1011460 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:14:08.427533 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:14:10.992767 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.565203793s)
	I0116 03:14:10.992809 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0116 03:14:10.992842 1011460 cache_images.go:123] Successfully loaded all cached images
	I0116 03:14:10.992849 1011460 cache_images.go:92] LoadImages completed in 17.624696262s
	I0116 03:14:10.992918 1011460 ssh_runner.go:195] Run: crio config
	I0116 03:14:11.057517 1011460 cni.go:84] Creating CNI manager for ""
	I0116 03:14:11.057552 1011460 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:14:11.057583 1011460 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:14:11.057614 1011460 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.29 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-934668 NodeName:no-preload-934668 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:14:11.057793 1011460 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-934668"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:14:11.057907 1011460 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-934668 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-934668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:14:11.057969 1011460 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0116 03:14:11.070793 1011460 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:14:11.070892 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:14:11.082832 1011460 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0116 03:14:11.103800 1011460 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0116 03:14:11.121508 1011460 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0116 03:14:11.139941 1011460 ssh_runner.go:195] Run: grep 192.168.50.29	control-plane.minikube.internal$ /etc/hosts
	I0116 03:14:11.144648 1011460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:14:11.160034 1011460 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668 for IP: 192.168.50.29
	I0116 03:14:11.160079 1011460 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:14:11.160310 1011460 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 03:14:11.160371 1011460 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 03:14:11.160469 1011460 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/client.key
	I0116 03:14:11.160562 1011460 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/apiserver.key.1326a2fe
	I0116 03:14:11.160631 1011460 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/proxy-client.key
	I0116 03:14:11.160780 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 03:14:11.160861 1011460 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 03:14:11.160887 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 03:14:11.160927 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 03:14:11.160976 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:14:11.161008 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 03:14:11.161070 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:14:11.161922 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:14:11.192041 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:14:11.217326 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:14:11.243091 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:14:11.268536 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:14:11.291985 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:14:11.317943 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:14:11.343359 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:14:11.368837 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 03:14:11.392907 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:14:11.417266 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 03:14:11.441365 1011460 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:14:11.459961 1011460 ssh_runner.go:195] Run: openssl version
	I0116 03:14:11.466850 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 03:14:11.477985 1011460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 03:14:11.483233 1011460 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 03:14:11.483296 1011460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 03:14:11.489111 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:14:11.500499 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:14:11.511988 1011460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:14:11.517205 1011460 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:14:11.517300 1011460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:14:11.523361 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:14:11.536305 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 03:14:11.549308 1011460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 03:14:11.554540 1011460 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 03:14:11.554632 1011460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 03:14:11.560816 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 03:14:11.573145 1011460 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:14:11.578678 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:14:11.586807 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:14:11.593146 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:14:11.599812 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:14:11.606216 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:14:11.612827 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 03:14:11.619060 1011460 kubeadm.go:404] StartCluster: {Name:no-preload-934668 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-934668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.29 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:14:11.619201 1011460 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:14:11.619271 1011460 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:14:11.661293 1011460 cri.go:89] found id: ""
	I0116 03:14:11.661390 1011460 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:14:11.672886 1011460 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:14:11.672921 1011460 kubeadm.go:636] restartCluster start
	I0116 03:14:11.672998 1011460 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:14:11.683692 1011460 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:11.684896 1011460 kubeconfig.go:92] found "no-preload-934668" server: "https://192.168.50.29:8443"
	I0116 03:14:11.687623 1011460 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:14:11.698887 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:11.698967 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:11.711969 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:12.199181 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:12.199277 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:12.213324 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:09.463196 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:11.464458 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:13.466325 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:10.585205 1011955 pod_ready.go:102] pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:12.585027 1011955 pod_ready.go:92] pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:12.585060 1011955 pod_ready.go:81] duration metric: took 8.009439483s waiting for pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.585074 1011955 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.592172 1011955 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:12.592208 1011955 pod_ready.go:81] duration metric: took 7.125355ms waiting for pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.592224 1011955 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.600113 1011955 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:12.600141 1011955 pod_ready.go:81] duration metric: took 7.90138ms waiting for pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.600152 1011955 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lntj2" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.606813 1011955 pod_ready.go:92] pod "kube-proxy-lntj2" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:12.606843 1011955 pod_ready.go:81] duration metric: took 6.6848ms waiting for pod "kube-proxy-lntj2" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.606852 1011955 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:14.115221 1011955 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:14.115256 1011955 pod_ready.go:81] duration metric: took 1.508396572s waiting for pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:14.115272 1011955 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.699849 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:12.700002 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:12.713330 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:13.199827 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:13.199938 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:13.212593 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:13.699177 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:13.699280 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:13.713754 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:14.199293 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:14.199387 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:14.211364 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:14.699976 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:14.700082 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:14.713420 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:15.198943 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:15.199056 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:15.211474 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:15.699723 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:15.699858 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:15.711566 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:16.199077 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:16.199195 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:16.210174 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:16.699188 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:16.699296 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:16.710971 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:17.199584 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:17.199733 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:17.211935 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:15.964130 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:18.463789 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:16.731737 1011681 kubeadm.go:787] kubelet initialised
	I0116 03:14:16.731763 1011681 kubeadm.go:788] duration metric: took 34.810672543s waiting for restarted kubelet to initialise ...
	I0116 03:14:16.731771 1011681 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:14:16.736630 1011681 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5j7ps" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.742482 1011681 pod_ready.go:92] pod "coredns-5644d7b6d9-5j7ps" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:16.742513 1011681 pod_ready.go:81] duration metric: took 5.851753ms waiting for pod "coredns-5644d7b6d9-5j7ps" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.742524 1011681 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-dfsf5" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.747113 1011681 pod_ready.go:92] pod "coredns-5644d7b6d9-dfsf5" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:16.747137 1011681 pod_ready.go:81] duration metric: took 4.606585ms waiting for pod "coredns-5644d7b6d9-dfsf5" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.747146 1011681 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.752744 1011681 pod_ready.go:92] pod "etcd-old-k8s-version-788237" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:16.752780 1011681 pod_ready.go:81] duration metric: took 5.626197ms waiting for pod "etcd-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.752794 1011681 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.757419 1011681 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-788237" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:16.757453 1011681 pod_ready.go:81] duration metric: took 4.649381ms waiting for pod "kube-apiserver-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.757468 1011681 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.131588 1011681 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-788237" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:17.131616 1011681 pod_ready.go:81] duration metric: took 374.139932ms waiting for pod "kube-controller-manager-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.131626 1011681 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vtxjk" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.531570 1011681 pod_ready.go:92] pod "kube-proxy-vtxjk" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:17.531610 1011681 pod_ready.go:81] duration metric: took 399.976074ms waiting for pod "kube-proxy-vtxjk" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.531625 1011681 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.931792 1011681 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-788237" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:17.931820 1011681 pod_ready.go:81] duration metric: took 400.186985ms waiting for pod "kube-scheduler-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.931832 1011681 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:19.939055 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:16.125560 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:18.624277 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:17.699246 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:17.699353 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:17.712025 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:18.199655 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:18.199784 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:18.212198 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:18.699816 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:18.699906 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:18.713019 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:19.199601 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:19.199706 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:19.211380 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:19.698919 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:19.699010 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:19.711001 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:20.199588 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:20.199694 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:20.211824 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:20.699345 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:20.699455 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:20.711489 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:21.199006 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:21.199111 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:21.210606 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:21.699928 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:21.700036 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:21.712086 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:21.712119 1011460 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:14:21.712128 1011460 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:14:21.712140 1011460 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:14:21.712220 1011460 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:14:21.754523 1011460 cri.go:89] found id: ""
	I0116 03:14:21.754644 1011460 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:14:21.770459 1011460 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:14:21.781022 1011460 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:14:21.781090 1011460 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:14:21.790780 1011460 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:14:21.790817 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:21.928434 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:20.962684 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:23.464521 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:21.941218 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:24.440549 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:21.123377 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:23.622729 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:22.965238 1011460 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.036762464s)
	I0116 03:14:22.965272 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:23.176590 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:23.273101 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:23.360976 1011460 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:14:23.361080 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:23.861957 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:24.361978 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:24.861204 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:25.361957 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:25.861277 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:25.884677 1011460 api_server.go:72] duration metric: took 2.523698355s to wait for apiserver process to appear ...
	I0116 03:14:25.884716 1011460 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:14:25.884742 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:25.885342 1011460 api_server.go:269] stopped: https://192.168.50.29:8443/healthz: Get "https://192.168.50.29:8443/healthz": dial tcp 192.168.50.29:8443: connect: connection refused
	I0116 03:14:26.385713 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:25.963386 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:28.463102 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:26.941545 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:29.439950 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:25.624030 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:27.624836 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:30.125387 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:30.121267 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:14:30.121300 1011460 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:14:30.121319 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:30.224826 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:14:30.224860 1011460 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:14:30.385083 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:30.392851 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:14:30.392896 1011460 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:14:30.885620 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:30.891094 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:14:30.891136 1011460 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:14:31.385130 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:31.399561 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:14:31.399594 1011460 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:14:31.885471 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:31.890676 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 200:
	ok
	I0116 03:14:31.900046 1011460 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 03:14:31.900079 1011460 api_server.go:131] duration metric: took 6.015355459s to wait for apiserver health ...
	I0116 03:14:31.900104 1011460 cni.go:84] Creating CNI manager for ""
	I0116 03:14:31.900111 1011460 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:14:31.902248 1011460 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:14:31.903832 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:14:31.920161 1011460 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:14:31.946401 1011460 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:14:31.957546 1011460 system_pods.go:59] 8 kube-system pods found
	I0116 03:14:31.957594 1011460 system_pods.go:61] "coredns-76f75df574-j55q6" [b8775751-87dd-4a05-8c84-05c09c947102] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:14:31.957605 1011460 system_pods.go:61] "etcd-no-preload-934668" [3ce80d11-c902-4c1d-9e2d-a65fed4d33c3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:14:31.957618 1011460 system_pods.go:61] "kube-apiserver-no-preload-934668" [3636a336-1ff1-4482-bf8c-559f8ae04f40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:14:31.957627 1011460 system_pods.go:61] "kube-controller-manager-no-preload-934668" [71bdeebc-ac26-43ca-bffe-0e8e97293d5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:14:31.957635 1011460 system_pods.go:61] "kube-proxy-c56bl" [d57e14d7-5e87-469f-8819-2749b2f7b54f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:14:31.957650 1011460 system_pods.go:61] "kube-scheduler-no-preload-934668" [10c61a29-dda4-4975-b290-a337e67070e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:14:31.957665 1011460 system_pods.go:61] "metrics-server-57f55c9bc5-lgmnp" [36a9cbc0-7644-421c-ab26-7262a295ea66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:14:31.957677 1011460 system_pods.go:61] "storage-provisioner" [c35e3af3-b48e-4184-8c06-2bd5bbbc399e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:14:31.957688 1011460 system_pods.go:74] duration metric: took 11.2629ms to wait for pod list to return data ...
	I0116 03:14:31.957703 1011460 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:14:31.963828 1011460 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:14:31.963860 1011460 node_conditions.go:123] node cpu capacity is 2
	I0116 03:14:31.963871 1011460 node_conditions.go:105] duration metric: took 6.162948ms to run NodePressure ...
	I0116 03:14:31.963894 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:32.261460 1011460 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:14:32.268148 1011460 kubeadm.go:787] kubelet initialised
	I0116 03:14:32.268181 1011460 kubeadm.go:788] duration metric: took 6.679075ms waiting for restarted kubelet to initialise ...
	I0116 03:14:32.268197 1011460 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:14:32.273936 1011460 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-j55q6" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:30.468482 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:32.967755 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:31.940340 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:34.440944 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:32.624635 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:35.124816 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:34.282691 1011460 pod_ready.go:102] pod "coredns-76f75df574-j55q6" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:35.787066 1011460 pod_ready.go:92] pod "coredns-76f75df574-j55q6" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:35.787097 1011460 pod_ready.go:81] duration metric: took 3.513129426s waiting for pod "coredns-76f75df574-j55q6" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:35.787112 1011460 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:35.463919 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:37.963533 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:36.939219 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:38.939377 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:37.128157 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:39.623730 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:37.798112 1011460 pod_ready.go:102] pod "etcd-no-preload-934668" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:39.794453 1011460 pod_ready.go:92] pod "etcd-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:39.794486 1011460 pod_ready.go:81] duration metric: took 4.007365728s waiting for pod "etcd-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:39.794496 1011460 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:39.799569 1011460 pod_ready.go:92] pod "kube-apiserver-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:39.799593 1011460 pod_ready.go:81] duration metric: took 5.090956ms waiting for pod "kube-apiserver-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:39.799602 1011460 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:40.309705 1011460 pod_ready.go:92] pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:40.309748 1011460 pod_ready.go:81] duration metric: took 510.137584ms waiting for pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:40.309761 1011460 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c56bl" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:40.315446 1011460 pod_ready.go:92] pod "kube-proxy-c56bl" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:40.315480 1011460 pod_ready.go:81] duration metric: took 5.710622ms waiting for pod "kube-proxy-c56bl" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:40.315494 1011460 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:40.467180 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:42.964593 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:40.940105 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:43.440135 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:41.623831 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:44.128608 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:42.324063 1011460 pod_ready.go:102] pod "kube-scheduler-no-preload-934668" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:44.325488 1011460 pod_ready.go:102] pod "kube-scheduler-no-preload-934668" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:44.823767 1011460 pod_ready.go:92] pod "kube-scheduler-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:44.823802 1011460 pod_ready.go:81] duration metric: took 4.508298497s waiting for pod "kube-scheduler-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:44.823818 1011460 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:46.834119 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:44.967470 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:47.467233 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:45.939182 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:48.439510 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:46.623093 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:48.623452 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:49.333255 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:51.334349 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:49.962021 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:51.964770 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:50.439867 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:52.938999 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:54.939661 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:50.624537 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:52.631432 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:55.124303 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:53.334508 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:55.832976 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:53.965445 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:56.462907 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:58.463527 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:57.438920 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:59.440238 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:57.621578 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:59.625435 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:58.332671 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:00.831831 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:00.465186 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:02.965629 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:01.440271 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:03.938665 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:02.124017 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:04.623475 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:03.334393 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:05.831665 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:05.463235 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:07.467282 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:05.939523 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:07.940337 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:07.122018 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:09.128032 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:08.331820 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:10.831910 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:09.963317 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:11.966051 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:10.439441 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:12.440308 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:14.940075 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:11.626866 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:14.122414 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:13.332152 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:15.831466 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:14.462126 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:16.465823 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:16.940118 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:19.440426 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:16.124215 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:18.624377 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:17.832950 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:20.329770 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:18.962537 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:20.966990 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:23.467331 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:21.939074 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:23.939905 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:21.122701 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:23.124103 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:25.137599 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:22.332462 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:24.832064 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:25.965556 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:28.467190 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:26.440039 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:28.940196 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:27.626127 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:29.626656 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:27.335063 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:29.834492 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:30.963079 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:33.462526 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:31.441125 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:33.939106 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:32.122443 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:34.123801 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:32.332153 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:34.832479 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:35.963546 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:37.964525 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:35.939539 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:38.439743 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:36.126074 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:38.623002 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:37.332835 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:39.832398 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:40.463769 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:42.962649 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:40.441879 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:42.939722 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:41.123840 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:43.625404 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:42.331290 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:44.831904 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:46.835841 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:44.964678 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:47.462896 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:45.439209 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:47.440145 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:49.939854 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:46.123807 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:48.126826 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:49.332005 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:51.332502 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:49.464762 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:51.964049 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:51.939904 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:54.439236 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:50.623153 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:52.624345 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:54.627203 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:53.831895 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:55.832232 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:54.463030 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:56.963946 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:56.439394 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:58.939030 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:56.627957 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:59.123599 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:58.332413 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:00.332637 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:59.463703 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:01.964436 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:00.941424 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:03.439546 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:01.123729 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:03.124738 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:02.832493 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:04.832547 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:04.463420 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:06.463569 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:05.941019 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:07.944737 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:05.624443 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:08.122957 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:07.333014 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:09.832431 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:11.834194 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:08.963205 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:10.963471 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:13.463710 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:10.439631 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:12.940212 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:10.622909 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:12.627122 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:15.122958 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:14.332800 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:16.831137 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:15.466395 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:17.962126 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:15.440905 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:17.939481 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:19.939923 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:17.624106 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:19.624608 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:18.832920 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:20.833205 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:19.963345 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:22.464212 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:21.941453 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:24.440153 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:22.122244 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:24.123259 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:23.331669 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:25.331743 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:24.963259 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:26.963490 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:26.442666 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:28.939968 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:26.123378 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:28.125204 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:27.332247 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:29.831956 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:28.963524 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:30.964135 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:33.462993 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:31.439282 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:33.439561 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:30.623257 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:33.123409 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:32.330980 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:34.332254 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:36.332346 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:35.463102 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:37.466011 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:35.441431 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:37.938841 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:39.939708 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:35.622848 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:37.623714 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:39.624018 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:38.333242 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:40.333759 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:39.961985 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:41.963743 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:41.940877 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:44.439855 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:42.123548 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:44.123765 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:42.831179 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:44.832125 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:46.832823 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:44.464876 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:46.963061 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:46.940520 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:49.438035 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:46.622349 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:48.626247 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:49.331443 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:51.832493 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:48.963476 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:50.963937 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:53.463054 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:51.439462 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:53.938617 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:51.124901 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:53.621994 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:53.834097 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:56.331556 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:55.464589 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:57.465198 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:55.939032 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:57.939901 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:59.940433 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:55.623283 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:58.123546 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:58.831287 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:00.833045 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:59.963001 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:02.464145 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:02.438594 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:04.439026 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:00.623369 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:03.122925 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:03.336121 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:05.832499 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:04.962987 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:06.963706 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:06.439557 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:08.440103 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:05.623650 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:08.123661 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:08.333356 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:10.832246 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:09.462321 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:11.464231 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:10.440612 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:12.939770 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:10.622705 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:13.123057 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:15.123165 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:13.330980 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:15.331911 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:13.963350 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:15.965533 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:18.464316 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:15.439711 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:17.940475 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:19.940957 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:17.124102 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:19.124940 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:17.334609 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:19.832181 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:21.834883 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:20.468955 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:22.964039 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:22.441403 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:24.938835 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:21.624672 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:24.121761 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:24.332265 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:26.332655 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:25.463695 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:27.963694 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:27.963726 1011501 pod_ready.go:81] duration metric: took 4m0.008813288s waiting for pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace to be "Ready" ...
	E0116 03:17:27.963735 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:17:27.963742 1011501 pod_ready.go:38] duration metric: took 4m3.208815045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:17:27.963758 1011501 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:17:27.963814 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:17:27.963886 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:17:28.018667 1011501 cri.go:89] found id: "42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:28.018693 1011501 cri.go:89] found id: ""
	I0116 03:17:28.018701 1011501 logs.go:284] 1 containers: [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f]
	I0116 03:17:28.018769 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.023716 1011501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:17:28.023802 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:17:28.076139 1011501 cri.go:89] found id: "36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:28.076173 1011501 cri.go:89] found id: ""
	I0116 03:17:28.076182 1011501 logs.go:284] 1 containers: [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618]
	I0116 03:17:28.076233 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.080954 1011501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:17:28.081020 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:17:28.126518 1011501 cri.go:89] found id: "2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:28.126544 1011501 cri.go:89] found id: ""
	I0116 03:17:28.126552 1011501 logs.go:284] 1 containers: [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70]
	I0116 03:17:28.126611 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.131611 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:17:28.131692 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:17:28.204571 1011501 cri.go:89] found id: "ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:28.204604 1011501 cri.go:89] found id: ""
	I0116 03:17:28.204612 1011501 logs.go:284] 1 containers: [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8]
	I0116 03:17:28.204672 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.210340 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:17:28.210415 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:17:28.262556 1011501 cri.go:89] found id: "da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:28.262587 1011501 cri.go:89] found id: ""
	I0116 03:17:28.262598 1011501 logs.go:284] 1 containers: [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047]
	I0116 03:17:28.262666 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.267670 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:17:28.267763 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:17:28.312958 1011501 cri.go:89] found id: "f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:28.312982 1011501 cri.go:89] found id: ""
	I0116 03:17:28.312990 1011501 logs.go:284] 1 containers: [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994]
	I0116 03:17:28.313040 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.317874 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:17:28.317951 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:17:28.363140 1011501 cri.go:89] found id: ""
	I0116 03:17:28.363172 1011501 logs.go:284] 0 containers: []
	W0116 03:17:28.363181 1011501 logs.go:286] No container was found matching "kindnet"
	I0116 03:17:28.363188 1011501 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:17:28.363245 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:17:28.408300 1011501 cri.go:89] found id: "0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:28.408330 1011501 cri.go:89] found id: "653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:28.408335 1011501 cri.go:89] found id: ""
	I0116 03:17:28.408342 1011501 logs.go:284] 2 containers: [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76]
	I0116 03:17:28.408406 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.413146 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.418553 1011501 logs.go:123] Gathering logs for kube-scheduler [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8] ...
	I0116 03:17:28.418588 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:28.466255 1011501 logs.go:123] Gathering logs for kube-proxy [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047] ...
	I0116 03:17:28.466305 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:28.511913 1011501 logs.go:123] Gathering logs for storage-provisioner [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657] ...
	I0116 03:17:28.511954 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:28.551053 1011501 logs.go:123] Gathering logs for dmesg ...
	I0116 03:17:28.551093 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:17:28.571627 1011501 logs.go:123] Gathering logs for kube-controller-manager [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994] ...
	I0116 03:17:28.571663 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:28.631193 1011501 logs.go:123] Gathering logs for storage-provisioner [653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76] ...
	I0116 03:17:28.631236 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:28.671010 1011501 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:17:28.671047 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:17:26.940503 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:28.941291 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:26.123594 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:28.124053 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:28.341231 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:30.831479 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:29.167771 1011501 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:17:29.167828 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:17:29.340535 1011501 logs.go:123] Gathering logs for etcd [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618] ...
	I0116 03:17:29.340574 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:29.397815 1011501 logs.go:123] Gathering logs for container status ...
	I0116 03:17:29.397861 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:17:29.459355 1011501 logs.go:123] Gathering logs for kubelet ...
	I0116 03:17:29.459408 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:17:29.519244 1011501 logs.go:123] Gathering logs for kube-apiserver [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f] ...
	I0116 03:17:29.519289 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:29.577686 1011501 logs.go:123] Gathering logs for coredns [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70] ...
	I0116 03:17:29.577736 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:32.124219 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:17:32.141191 1011501 api_server.go:72] duration metric: took 4m13.431910425s to wait for apiserver process to appear ...
	I0116 03:17:32.141224 1011501 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:17:32.141316 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:17:32.141397 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:17:32.182105 1011501 cri.go:89] found id: "42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:32.182133 1011501 cri.go:89] found id: ""
	I0116 03:17:32.182142 1011501 logs.go:284] 1 containers: [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f]
	I0116 03:17:32.182200 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.186819 1011501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:17:32.186900 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:17:32.234240 1011501 cri.go:89] found id: "36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:32.234282 1011501 cri.go:89] found id: ""
	I0116 03:17:32.234294 1011501 logs.go:284] 1 containers: [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618]
	I0116 03:17:32.234366 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.240481 1011501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:17:32.240550 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:17:32.284981 1011501 cri.go:89] found id: "2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:32.285016 1011501 cri.go:89] found id: ""
	I0116 03:17:32.285028 1011501 logs.go:284] 1 containers: [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70]
	I0116 03:17:32.285095 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.289894 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:17:32.289985 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:17:32.331520 1011501 cri.go:89] found id: "ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:32.331555 1011501 cri.go:89] found id: ""
	I0116 03:17:32.331567 1011501 logs.go:284] 1 containers: [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8]
	I0116 03:17:32.331646 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.336053 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:17:32.336131 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:17:32.383199 1011501 cri.go:89] found id: "da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:32.383233 1011501 cri.go:89] found id: ""
	I0116 03:17:32.383253 1011501 logs.go:284] 1 containers: [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047]
	I0116 03:17:32.383324 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.388197 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:17:32.388278 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:17:32.435679 1011501 cri.go:89] found id: "f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:32.435711 1011501 cri.go:89] found id: ""
	I0116 03:17:32.435722 1011501 logs.go:284] 1 containers: [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994]
	I0116 03:17:32.435795 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.441503 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:17:32.441578 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:17:32.484750 1011501 cri.go:89] found id: ""
	I0116 03:17:32.484783 1011501 logs.go:284] 0 containers: []
	W0116 03:17:32.484794 1011501 logs.go:286] No container was found matching "kindnet"
	I0116 03:17:32.484803 1011501 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:17:32.484872 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:17:32.534967 1011501 cri.go:89] found id: "0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:32.534996 1011501 cri.go:89] found id: "653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:32.535002 1011501 cri.go:89] found id: ""
	I0116 03:17:32.535011 1011501 logs.go:284] 2 containers: [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76]
	I0116 03:17:32.535079 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.539828 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.544640 1011501 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:17:32.544670 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:17:32.681760 1011501 logs.go:123] Gathering logs for kube-apiserver [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f] ...
	I0116 03:17:32.681831 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:32.741557 1011501 logs.go:123] Gathering logs for container status ...
	I0116 03:17:32.741606 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:17:32.791811 1011501 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:17:32.791857 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:17:33.242377 1011501 logs.go:123] Gathering logs for etcd [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618] ...
	I0116 03:17:33.242424 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:33.303162 1011501 logs.go:123] Gathering logs for kube-proxy [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047] ...
	I0116 03:17:33.303211 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:33.346935 1011501 logs.go:123] Gathering logs for storage-provisioner [653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76] ...
	I0116 03:17:33.346975 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:33.393563 1011501 logs.go:123] Gathering logs for kube-controller-manager [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994] ...
	I0116 03:17:33.393603 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:33.453859 1011501 logs.go:123] Gathering logs for storage-provisioner [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657] ...
	I0116 03:17:33.453902 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:33.492763 1011501 logs.go:123] Gathering logs for kubelet ...
	I0116 03:17:33.492797 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:17:33.555700 1011501 logs.go:123] Gathering logs for coredns [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70] ...
	I0116 03:17:33.555742 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:33.601049 1011501 logs.go:123] Gathering logs for kube-scheduler [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8] ...
	I0116 03:17:33.601084 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:33.652000 1011501 logs.go:123] Gathering logs for dmesg ...
	I0116 03:17:33.652035 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:17:31.438487 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:33.440493 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:30.621532 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:32.622315 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:34.622840 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:32.832920 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:35.331711 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:36.168102 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:17:36.173921 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 200:
	ok
	I0116 03:17:36.175763 1011501 api_server.go:141] control plane version: v1.28.4
	I0116 03:17:36.175789 1011501 api_server.go:131] duration metric: took 4.034557823s to wait for apiserver health ...
	I0116 03:17:36.175798 1011501 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:17:36.175826 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:17:36.175890 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:17:36.224810 1011501 cri.go:89] found id: "42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:36.224847 1011501 cri.go:89] found id: ""
	I0116 03:17:36.224859 1011501 logs.go:284] 1 containers: [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f]
	I0116 03:17:36.224925 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.229177 1011501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:17:36.229255 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:17:36.271241 1011501 cri.go:89] found id: "36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:36.271272 1011501 cri.go:89] found id: ""
	I0116 03:17:36.271281 1011501 logs.go:284] 1 containers: [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618]
	I0116 03:17:36.271342 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.275772 1011501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:17:36.275846 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:17:36.319867 1011501 cri.go:89] found id: "2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:36.319899 1011501 cri.go:89] found id: ""
	I0116 03:17:36.319909 1011501 logs.go:284] 1 containers: [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70]
	I0116 03:17:36.319977 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.324329 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:17:36.324410 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:17:36.363526 1011501 cri.go:89] found id: "ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:36.363551 1011501 cri.go:89] found id: ""
	I0116 03:17:36.363559 1011501 logs.go:284] 1 containers: [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8]
	I0116 03:17:36.363614 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.367896 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:17:36.367974 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:17:36.408601 1011501 cri.go:89] found id: "da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:36.408642 1011501 cri.go:89] found id: ""
	I0116 03:17:36.408657 1011501 logs.go:284] 1 containers: [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047]
	I0116 03:17:36.408715 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.413041 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:17:36.413111 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:17:36.460091 1011501 cri.go:89] found id: "f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:36.460117 1011501 cri.go:89] found id: ""
	I0116 03:17:36.460126 1011501 logs.go:284] 1 containers: [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994]
	I0116 03:17:36.460201 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.464375 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:17:36.464457 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:17:36.501943 1011501 cri.go:89] found id: ""
	I0116 03:17:36.501969 1011501 logs.go:284] 0 containers: []
	W0116 03:17:36.501977 1011501 logs.go:286] No container was found matching "kindnet"
	I0116 03:17:36.501984 1011501 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:17:36.502037 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:17:36.550841 1011501 cri.go:89] found id: "0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:36.550874 1011501 cri.go:89] found id: "653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:36.550882 1011501 cri.go:89] found id: ""
	I0116 03:17:36.550892 1011501 logs.go:284] 2 containers: [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76]
	I0116 03:17:36.550976 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.555728 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.560058 1011501 logs.go:123] Gathering logs for kube-controller-manager [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994] ...
	I0116 03:17:36.560087 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:36.618163 1011501 logs.go:123] Gathering logs for kubelet ...
	I0116 03:17:36.618208 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:17:36.673167 1011501 logs.go:123] Gathering logs for dmesg ...
	I0116 03:17:36.673216 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:17:36.690061 1011501 logs.go:123] Gathering logs for storage-provisioner [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657] ...
	I0116 03:17:36.690099 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:36.732953 1011501 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:17:36.733013 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:17:37.127465 1011501 logs.go:123] Gathering logs for container status ...
	I0116 03:17:37.127504 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:17:37.176618 1011501 logs.go:123] Gathering logs for coredns [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70] ...
	I0116 03:17:37.176660 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:37.223851 1011501 logs.go:123] Gathering logs for kube-scheduler [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8] ...
	I0116 03:17:37.223895 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:37.265502 1011501 logs.go:123] Gathering logs for kube-proxy [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047] ...
	I0116 03:17:37.265542 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:37.323107 1011501 logs.go:123] Gathering logs for storage-provisioner [653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76] ...
	I0116 03:17:37.323140 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:37.368305 1011501 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:17:37.368348 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:17:37.519310 1011501 logs.go:123] Gathering logs for kube-apiserver [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f] ...
	I0116 03:17:37.519352 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:37.580961 1011501 logs.go:123] Gathering logs for etcd [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618] ...
	I0116 03:17:37.581000 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:35.940233 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:38.439452 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:40.146809 1011501 system_pods.go:59] 8 kube-system pods found
	I0116 03:17:40.146843 1011501 system_pods.go:61] "coredns-5dd5756b68-stqh5" [adbcef96-218b-42ed-9daf-72c274be0690] Running
	I0116 03:17:40.146849 1011501 system_pods.go:61] "etcd-embed-certs-480663" [6694af11-3b1a-4b84-adcb-3416b87a076f] Running
	I0116 03:17:40.146853 1011501 system_pods.go:61] "kube-apiserver-embed-certs-480663" [3a3d0e5d-35eb-4f2f-9686-b17af19bc777] Running
	I0116 03:17:40.146857 1011501 system_pods.go:61] "kube-controller-manager-embed-certs-480663" [729b671f-23b9-409b-9f11-d6992f2355c7] Running
	I0116 03:17:40.146861 1011501 system_pods.go:61] "kube-proxy-j4786" [aabb98a7-fe55-4105-a5d2-c1e312464107] Running
	I0116 03:17:40.146865 1011501 system_pods.go:61] "kube-scheduler-embed-certs-480663" [11baa5d7-6e7b-4a25-be6f-e7975b51023d] Running
	I0116 03:17:40.146872 1011501 system_pods.go:61] "metrics-server-57f55c9bc5-7d2fh" [512cf579-f335-4995-8721-74bb84da776e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:17:40.146877 1011501 system_pods.go:61] "storage-provisioner" [da59ff59-869f-48a9-a5c5-c95bb807cbcf] Running
	I0116 03:17:40.146887 1011501 system_pods.go:74] duration metric: took 3.971081813s to wait for pod list to return data ...
	I0116 03:17:40.146900 1011501 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:17:40.149755 1011501 default_sa.go:45] found service account: "default"
	I0116 03:17:40.149786 1011501 default_sa.go:55] duration metric: took 2.87163ms for default service account to be created ...
	I0116 03:17:40.149798 1011501 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:17:40.156300 1011501 system_pods.go:86] 8 kube-system pods found
	I0116 03:17:40.156327 1011501 system_pods.go:89] "coredns-5dd5756b68-stqh5" [adbcef96-218b-42ed-9daf-72c274be0690] Running
	I0116 03:17:40.156333 1011501 system_pods.go:89] "etcd-embed-certs-480663" [6694af11-3b1a-4b84-adcb-3416b87a076f] Running
	I0116 03:17:40.156337 1011501 system_pods.go:89] "kube-apiserver-embed-certs-480663" [3a3d0e5d-35eb-4f2f-9686-b17af19bc777] Running
	I0116 03:17:40.156341 1011501 system_pods.go:89] "kube-controller-manager-embed-certs-480663" [729b671f-23b9-409b-9f11-d6992f2355c7] Running
	I0116 03:17:40.156345 1011501 system_pods.go:89] "kube-proxy-j4786" [aabb98a7-fe55-4105-a5d2-c1e312464107] Running
	I0116 03:17:40.156349 1011501 system_pods.go:89] "kube-scheduler-embed-certs-480663" [11baa5d7-6e7b-4a25-be6f-e7975b51023d] Running
	I0116 03:17:40.156355 1011501 system_pods.go:89] "metrics-server-57f55c9bc5-7d2fh" [512cf579-f335-4995-8721-74bb84da776e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:17:40.156360 1011501 system_pods.go:89] "storage-provisioner" [da59ff59-869f-48a9-a5c5-c95bb807cbcf] Running
	I0116 03:17:40.156367 1011501 system_pods.go:126] duration metric: took 6.548782ms to wait for k8s-apps to be running ...
	I0116 03:17:40.156374 1011501 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:17:40.156421 1011501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:17:40.173539 1011501 system_svc.go:56] duration metric: took 17.152768ms WaitForService to wait for kubelet.
	I0116 03:17:40.173574 1011501 kubeadm.go:581] duration metric: took 4m21.464303041s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:17:40.173623 1011501 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:17:40.177277 1011501 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:17:40.177309 1011501 node_conditions.go:123] node cpu capacity is 2
	I0116 03:17:40.177324 1011501 node_conditions.go:105] duration metric: took 3.695642ms to run NodePressure ...
	I0116 03:17:40.177336 1011501 start.go:228] waiting for startup goroutines ...
	I0116 03:17:40.177342 1011501 start.go:233] waiting for cluster config update ...
	I0116 03:17:40.177353 1011501 start.go:242] writing updated cluster config ...
	I0116 03:17:40.177673 1011501 ssh_runner.go:195] Run: rm -f paused
	I0116 03:17:40.237611 1011501 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:17:40.239605 1011501 out.go:177] * Done! kubectl is now configured to use "embed-certs-480663" cluster and "default" namespace by default
	I0116 03:17:36.624876 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:39.123549 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:37.332861 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:39.832707 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:40.440194 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:42.939505 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:41.123729 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:43.124392 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:42.335659 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:44.833290 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:45.438892 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:47.439827 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:49.440946 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:45.622763 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:47.623098 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:49.623524 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:47.331849 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:49.832349 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:51.938022 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:53.939098 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:52.122851 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:54.123517 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:52.333667 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:54.832564 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:55.939981 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:57.941055 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:56.623347 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:59.123492 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:57.332003 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:59.332838 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:01.333665 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:00.440795 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:02.939475 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:01.623191 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:03.623475 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:03.831584 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:05.832669 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:05.438818 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:07.940446 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:06.125503 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:08.624414 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:07.832961 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:10.332435 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:10.439517 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:12.938184 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:14.939116 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:10.626134 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:13.123124 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:14.116258 1011955 pod_ready.go:81] duration metric: took 4m0.000962112s waiting for pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace to be "Ready" ...
	E0116 03:18:14.116292 1011955 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:18:14.116325 1011955 pod_ready.go:38] duration metric: took 4m14.081176627s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:18:14.116391 1011955 kubeadm.go:640] restartCluster took 4m34.84299912s
	W0116 03:18:14.116515 1011955 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:18:14.116555 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:18:12.832787 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:14.833104 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:16.833154 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:16.939522 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:17.932247 1011681 pod_ready.go:81] duration metric: took 4m0.000397189s waiting for pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace to be "Ready" ...
	E0116 03:18:17.932288 1011681 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:18:17.932314 1011681 pod_ready.go:38] duration metric: took 4m1.200532474s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:18:17.932356 1011681 kubeadm.go:640] restartCluster took 4m59.25901651s
	W0116 03:18:17.932448 1011681 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:18:17.932484 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:18:19.332379 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:21.332813 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:24.791837 1011681 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.859306364s)
	I0116 03:18:24.791938 1011681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:18:24.810486 1011681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:18:24.822414 1011681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:18:24.834751 1011681 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:18:24.834814 1011681 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0116 03:18:25.070509 1011681 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:18:23.832402 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:25.834563 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:28.584480 1011955 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.467896175s)
	I0116 03:18:28.584554 1011955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:18:28.602324 1011955 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:18:28.614934 1011955 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:18:28.624508 1011955 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:18:28.624564 1011955 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 03:18:28.679880 1011955 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 03:18:28.679970 1011955 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:18:28.862872 1011955 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:18:28.862987 1011955 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:18:28.863151 1011955 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:18:29.129842 1011955 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:18:29.131728 1011955 out.go:204]   - Generating certificates and keys ...
	I0116 03:18:29.131835 1011955 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:18:29.131918 1011955 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:18:29.132072 1011955 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:18:29.132174 1011955 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:18:29.132294 1011955 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:18:29.132393 1011955 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:18:29.132472 1011955 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:18:29.132553 1011955 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:18:29.132646 1011955 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:18:29.132781 1011955 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:18:29.132867 1011955 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:18:29.132972 1011955 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:18:29.254715 1011955 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:18:29.440667 1011955 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:18:29.640243 1011955 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:18:29.792291 1011955 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:18:29.793072 1011955 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:18:29.799431 1011955 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:18:29.801398 1011955 out.go:204]   - Booting up control plane ...
	I0116 03:18:29.801516 1011955 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:18:29.801601 1011955 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:18:29.801686 1011955 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:18:29.820061 1011955 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:18:29.823043 1011955 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:18:29.823191 1011955 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 03:18:29.951227 1011955 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:18:27.835298 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:30.331925 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:32.332063 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:34.333064 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:36.833631 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:38.602437 1011681 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0116 03:18:38.602518 1011681 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:18:38.602608 1011681 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:18:38.602737 1011681 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:18:38.602861 1011681 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:18:38.602991 1011681 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:18:38.603089 1011681 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:18:38.603148 1011681 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0116 03:18:38.603223 1011681 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:18:38.604856 1011681 out.go:204]   - Generating certificates and keys ...
	I0116 03:18:38.604966 1011681 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:18:38.605046 1011681 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:18:38.605139 1011681 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:18:38.605222 1011681 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:18:38.605299 1011681 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:18:38.605359 1011681 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:18:38.605446 1011681 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:18:38.605510 1011681 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:18:38.605570 1011681 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:18:38.605629 1011681 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:18:38.605662 1011681 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:18:38.605707 1011681 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:18:38.605749 1011681 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:18:38.605792 1011681 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:18:38.605878 1011681 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:18:38.605964 1011681 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:18:38.606070 1011681 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:18:38.608024 1011681 out.go:204]   - Booting up control plane ...
	I0116 03:18:38.608146 1011681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:18:38.608263 1011681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:18:38.608375 1011681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:18:38.608508 1011681 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:18:38.608676 1011681 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:18:38.608755 1011681 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.506014 seconds
	I0116 03:18:38.608891 1011681 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:18:38.609075 1011681 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:18:38.609173 1011681 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:18:38.609358 1011681 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-788237 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0116 03:18:38.609437 1011681 kubeadm.go:322] [bootstrap-token] Using token: ou2w4b.xm5ff9ai4zzr80lg
	I0116 03:18:38.611110 1011681 out.go:204]   - Configuring RBAC rules ...
	I0116 03:18:38.611236 1011681 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:18:38.611429 1011681 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:18:38.611590 1011681 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:18:38.611730 1011681 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:18:38.611834 1011681 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:18:38.611886 1011681 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:18:38.611942 1011681 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:18:38.611948 1011681 kubeadm.go:322] 
	I0116 03:18:38.612019 1011681 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:18:38.612024 1011681 kubeadm.go:322] 
	I0116 03:18:38.612116 1011681 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:18:38.612122 1011681 kubeadm.go:322] 
	I0116 03:18:38.612153 1011681 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:18:38.612235 1011681 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:18:38.612296 1011681 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:18:38.612302 1011681 kubeadm.go:322] 
	I0116 03:18:38.612363 1011681 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:18:38.612452 1011681 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:18:38.612535 1011681 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:18:38.612541 1011681 kubeadm.go:322] 
	I0116 03:18:38.612641 1011681 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0116 03:18:38.612732 1011681 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:18:38.612738 1011681 kubeadm.go:322] 
	I0116 03:18:38.612838 1011681 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ou2w4b.xm5ff9ai4zzr80lg \
	I0116 03:18:38.612975 1011681 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b \
	I0116 03:18:38.613007 1011681 kubeadm.go:322]     --control-plane 	  
	I0116 03:18:38.613013 1011681 kubeadm.go:322] 
	I0116 03:18:38.613115 1011681 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:18:38.613122 1011681 kubeadm.go:322] 
	I0116 03:18:38.613224 1011681 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ou2w4b.xm5ff9ai4zzr80lg \
	I0116 03:18:38.613366 1011681 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b 
	I0116 03:18:38.613378 1011681 cni.go:84] Creating CNI manager for ""
	I0116 03:18:38.613386 1011681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:18:38.615140 1011681 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:18:38.454228 1011955 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502851 seconds
	I0116 03:18:38.454363 1011955 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:18:38.474581 1011955 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:18:39.018312 1011955 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:18:39.018620 1011955 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-775571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 03:18:39.535782 1011955 kubeadm.go:322] [bootstrap-token] Using token: 8fntor.yrfb8kfaxajcp5qt
	I0116 03:18:39.537357 1011955 out.go:204]   - Configuring RBAC rules ...
	I0116 03:18:39.537505 1011955 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:18:39.552902 1011955 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:18:39.571482 1011955 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:18:39.575866 1011955 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:18:39.581062 1011955 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:18:39.586833 1011955 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:18:39.619342 1011955 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:18:39.888315 1011955 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:18:39.966804 1011955 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:18:39.971287 1011955 kubeadm.go:322] 
	I0116 03:18:39.971371 1011955 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:18:39.971383 1011955 kubeadm.go:322] 
	I0116 03:18:39.971472 1011955 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:18:39.971482 1011955 kubeadm.go:322] 
	I0116 03:18:39.971556 1011955 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:18:39.971657 1011955 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:18:39.971750 1011955 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:18:39.971761 1011955 kubeadm.go:322] 
	I0116 03:18:39.971835 1011955 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 03:18:39.971846 1011955 kubeadm.go:322] 
	I0116 03:18:39.971927 1011955 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 03:18:39.971941 1011955 kubeadm.go:322] 
	I0116 03:18:39.971984 1011955 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:18:39.972080 1011955 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:18:39.972187 1011955 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:18:39.972199 1011955 kubeadm.go:322] 
	I0116 03:18:39.972317 1011955 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:18:39.972431 1011955 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:18:39.972450 1011955 kubeadm.go:322] 
	I0116 03:18:39.972580 1011955 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 8fntor.yrfb8kfaxajcp5qt \
	I0116 03:18:39.972743 1011955 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b \
	I0116 03:18:39.972782 1011955 kubeadm.go:322] 	--control-plane 
	I0116 03:18:39.972805 1011955 kubeadm.go:322] 
	I0116 03:18:39.972924 1011955 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:18:39.972942 1011955 kubeadm.go:322] 
	I0116 03:18:39.973047 1011955 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 8fntor.yrfb8kfaxajcp5qt \
	I0116 03:18:39.973210 1011955 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b 
	I0116 03:18:39.974532 1011955 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:18:39.974577 1011955 cni.go:84] Creating CNI manager for ""
	I0116 03:18:39.974604 1011955 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:18:39.976623 1011955 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:18:38.616520 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:18:38.639990 1011681 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:18:38.666967 1011681 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:18:38.667168 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:38.667280 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=old-k8s-version-788237 minikube.k8s.io/updated_at=2024_01_16T03_18_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:38.688522 1011681 ops.go:34] apiserver oom_adj: -16
	I0116 03:18:38.976096 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:39.476978 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:39.976086 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:39.977876 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:18:40.005273 1011955 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:18:40.087713 1011955 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:18:40.087863 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:40.087863 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=default-k8s-diff-port-775571 minikube.k8s.io/updated_at=2024_01_16T03_18_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:40.168057 1011955 ops.go:34] apiserver oom_adj: -16
	I0116 03:18:40.492375 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:39.331115 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:41.332298 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:40.476064 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:40.977085 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:41.476706 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:41.976429 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:42.476172 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:42.976176 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:43.476449 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:43.977056 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:44.476761 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:44.976151 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:40.992990 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:41.492564 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:41.992578 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:42.493062 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:42.993372 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:43.493473 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:43.993319 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:44.493019 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:44.993411 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:45.492880 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:43.832198 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:44.824162 1011460 pod_ready.go:81] duration metric: took 4m0.000326915s waiting for pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace to be "Ready" ...
	E0116 03:18:44.824195 1011460 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:18:44.824281 1011460 pod_ready.go:38] duration metric: took 4m12.556069814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:18:44.824351 1011460 kubeadm.go:640] restartCluster took 4m33.151422709s
	W0116 03:18:44.824438 1011460 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:18:44.824479 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:18:45.476629 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:45.977106 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:46.476146 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:46.977113 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:47.476693 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:47.976945 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:48.477170 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:48.976394 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:49.476848 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:49.976797 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:45.993346 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:46.493256 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:46.993006 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:47.492403 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:47.992813 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:48.493940 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:48.992944 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:49.493490 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:49.993389 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:50.492678 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:50.992627 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:51.493472 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:51.993052 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:52.492430 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:52.646080 1011955 kubeadm.go:1088] duration metric: took 12.558292993s to wait for elevateKubeSystemPrivileges.
	I0116 03:18:52.646138 1011955 kubeadm.go:406] StartCluster complete in 5m13.439862133s
	I0116 03:18:52.646169 1011955 settings.go:142] acquiring lock: {Name:mk39c69fb5454efd5afab021c02c3bec1f1b4e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:18:52.646281 1011955 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:18:52.648500 1011955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:18:52.648860 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:18:52.648869 1011955 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:18:52.648980 1011955 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-775571"
	I0116 03:18:52.649003 1011955 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-775571"
	I0116 03:18:52.649005 1011955 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-775571"
	I0116 03:18:52.649029 1011955 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-775571"
	I0116 03:18:52.649034 1011955 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-775571"
	W0116 03:18:52.649043 1011955 addons.go:243] addon metrics-server should already be in state true
	I0116 03:18:52.649114 1011955 config.go:182] Loaded profile config "default-k8s-diff-port-775571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:18:52.649008 1011955 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-775571"
	I0116 03:18:52.649130 1011955 host.go:66] Checking if "default-k8s-diff-port-775571" exists ...
	W0116 03:18:52.649149 1011955 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:18:52.649212 1011955 host.go:66] Checking if "default-k8s-diff-port-775571" exists ...
	I0116 03:18:52.649529 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.649563 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.649529 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.649613 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.649660 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.649697 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.666073 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0116 03:18:52.666727 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.666879 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41643
	I0116 03:18:52.667406 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.667435 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.667447 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.667814 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.667985 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.668015 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.668030 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetState
	I0116 03:18:52.668373 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.668745 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33307
	I0116 03:18:52.668995 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.669057 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.669205 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.669742 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.669767 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.670181 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.670725 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.670760 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.672109 1011955 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-775571"
	W0116 03:18:52.672134 1011955 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:18:52.672165 1011955 host.go:66] Checking if "default-k8s-diff-port-775571" exists ...
	I0116 03:18:52.672575 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.672630 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.687775 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41925
	I0116 03:18:52.689625 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0116 03:18:52.689778 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.690073 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.690203 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41865
	I0116 03:18:52.690460 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.690473 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.690742 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.690859 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.691055 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.691067 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.691409 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.691627 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetState
	I0116 03:18:52.692030 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetState
	I0116 03:18:52.693938 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:18:52.696389 1011955 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:18:52.694587 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:18:52.694891 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.698046 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.698164 1011955 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:18:52.698189 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:18:52.698218 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:18:52.700172 1011955 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:18:52.701996 1011955 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:18:52.702018 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:18:52.702043 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:18:52.702058 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.699885 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.702560 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.702602 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.702805 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:18:52.702820 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.702870 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:18:52.703094 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:18:52.703363 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:18:52.703544 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:18:52.705663 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.706131 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:18:52.706164 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.706417 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:18:52.706587 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:18:52.706758 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:18:52.706916 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:18:52.725464 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39049
	I0116 03:18:52.726113 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.726781 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.726824 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.727253 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.727482 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetState
	I0116 03:18:52.729485 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:18:52.729789 1011955 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:18:52.729823 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:18:52.729848 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:18:52.732669 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.733121 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:18:52.733142 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.733351 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:18:52.733557 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:18:52.733766 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:18:52.733963 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:18:52.873193 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:18:52.909098 1011955 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:18:52.909141 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:18:52.941709 1011955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:18:52.942443 1011955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:18:52.966702 1011955 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:18:52.966736 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:18:53.020737 1011955 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:18:53.020823 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:18:53.066186 1011955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:18:53.170342 1011955 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-775571" context rescaled to 1 replicas
	I0116 03:18:53.170433 1011955 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:18:53.172678 1011955 out.go:177] * Verifying Kubernetes components...
	I0116 03:18:50.476090 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:50.976173 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:51.476673 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:51.976165 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:52.476238 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:52.976850 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:53.476943 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:53.686011 1011681 kubeadm.go:1088] duration metric: took 15.018895956s to wait for elevateKubeSystemPrivileges.
	I0116 03:18:53.686052 1011681 kubeadm.go:406] StartCluster complete in 5m35.06362605s
	I0116 03:18:53.686080 1011681 settings.go:142] acquiring lock: {Name:mk39c69fb5454efd5afab021c02c3bec1f1b4e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:18:53.686180 1011681 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:18:53.688860 1011681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:18:53.689175 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:18:53.689247 1011681 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:18:53.689333 1011681 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-788237"
	I0116 03:18:53.689349 1011681 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-788237"
	I0116 03:18:53.689364 1011681 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-788237"
	I0116 03:18:53.689377 1011681 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-788237"
	W0116 03:18:53.689389 1011681 addons.go:243] addon metrics-server should already be in state true
	I0116 03:18:53.689436 1011681 config.go:182] Loaded profile config "old-k8s-version-788237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:18:53.689455 1011681 host.go:66] Checking if "old-k8s-version-788237" exists ...
	I0116 03:18:53.689378 1011681 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-788237"
	I0116 03:18:53.689357 1011681 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-788237"
	W0116 03:18:53.689599 1011681 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:18:53.689645 1011681 host.go:66] Checking if "old-k8s-version-788237" exists ...
	I0116 03:18:53.689901 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.689924 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.689924 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.689950 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.690144 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.690180 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.711157 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37709
	I0116 03:18:53.713950 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.714211 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38563
	I0116 03:18:53.714552 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.714576 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.714663 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.715012 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.715181 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.715199 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.715683 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.715710 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.716263 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.716605 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetState
	I0116 03:18:53.720570 1011681 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-788237"
	W0116 03:18:53.720598 1011681 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:18:53.720630 1011681 host.go:66] Checking if "old-k8s-version-788237" exists ...
	I0116 03:18:53.721140 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.721183 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.724181 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0116 03:18:53.724763 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.725334 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.725364 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.725737 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.726313 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.726362 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.737615 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46761
	I0116 03:18:53.738167 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.738714 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.738739 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.739154 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.739431 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetState
	I0116 03:18:53.741559 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:18:53.741765 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41413
	I0116 03:18:53.744019 1011681 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:18:53.745656 1011681 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:18:53.745691 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:18:53.745718 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:18:53.745868 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.746513 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.746535 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.746969 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.747587 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.747621 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.749923 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.749959 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:18:53.749982 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.750294 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:18:53.750501 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:18:53.750814 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:18:53.751535 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:18:53.755634 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
	I0116 03:18:53.756246 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.756894 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.756918 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.761942 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.765938 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetState
	I0116 03:18:53.769965 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:18:53.770273 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40811
	I0116 03:18:53.770837 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.772568 1011681 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:18:53.771317 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.774128 1011681 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:18:53.772620 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.774150 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:18:53.774254 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:18:53.774578 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.775367 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetState
	I0116 03:18:53.778662 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.778671 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:18:53.778694 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:18:53.778716 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.781111 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:18:53.781144 1011681 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:18:53.781161 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:18:53.781185 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:18:53.781359 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:18:53.781509 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:18:53.781647 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:18:53.784375 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.784817 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:18:53.784841 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.785021 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:18:53.785248 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:18:53.785367 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:18:53.785586 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:18:53.920099 1011681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:18:53.964232 1011681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:18:53.983575 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:18:54.005702 1011681 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:18:54.005736 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:18:54.084574 1011681 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:18:54.084606 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:18:54.143597 1011681 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:18:54.143640 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:18:54.195269 1011681 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-788237" context rescaled to 1 replicas
	I0116 03:18:54.195324 1011681 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:18:54.197378 1011681 out.go:177] * Verifying Kubernetes components...
	I0116 03:18:54.198806 1011681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:18:54.323439 1011681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:18:55.133484 1011681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.169208691s)
	I0116 03:18:55.133595 1011681 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-788237" to be "Ready" ...
	I0116 03:18:55.133486 1011681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.213323807s)
	I0116 03:18:55.133650 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.133664 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.133531 1011681 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.149922539s)
	I0116 03:18:55.133873 1011681 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0116 03:18:55.133967 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Closing plugin on server side
	I0116 03:18:55.133609 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.133993 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.134363 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Closing plugin on server side
	I0116 03:18:55.134402 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.134415 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.134426 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.134439 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.134750 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Closing plugin on server side
	I0116 03:18:55.134766 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.134781 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.135982 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.136002 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.136014 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.136046 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.136623 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.136656 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:53.174208 1011955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:18:54.899603 1011955 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.026351829s)
	I0116 03:18:54.899706 1011955 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0116 03:18:55.340175 1011955 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.397688954s)
	I0116 03:18:55.340238 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.340252 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.340413 1011955 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.398670161s)
	I0116 03:18:55.340439 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.340449 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.344833 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Closing plugin on server side
	I0116 03:18:55.344839 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Closing plugin on server side
	I0116 03:18:55.344858 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.344858 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.344871 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.344877 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.344886 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.344889 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.344897 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.344899 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.345154 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.345172 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.345207 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Closing plugin on server side
	I0116 03:18:55.345229 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Closing plugin on server side
	I0116 03:18:55.345311 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.345328 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.411967 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.412006 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.412382 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.412402 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.229555 1011681 node_ready.go:49] node "old-k8s-version-788237" has status "Ready":"True"
	I0116 03:18:55.229641 1011681 node_ready.go:38] duration metric: took 95.965741ms waiting for node "old-k8s-version-788237" to be "Ready" ...
	I0116 03:18:55.229667 1011681 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:18:55.290235 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.290288 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.290652 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.290675 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.311952 1011681 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qmzl6" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:55.886230 1011681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.562731329s)
	I0116 03:18:55.886302 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.886324 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.886813 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.886840 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.886852 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.886863 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.889105 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Closing plugin on server side
	I0116 03:18:55.889151 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.889160 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.889171 1011681 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-788237"
	I0116 03:18:55.891206 1011681 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:18:55.952771 1011955 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.778522731s)
	I0116 03:18:55.952832 1011955 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-775571" to be "Ready" ...
	I0116 03:18:55.953294 1011955 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.887054667s)
	I0116 03:18:55.953343 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.953359 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.956009 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Closing plugin on server side
	I0116 03:18:55.956050 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.956072 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.956095 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.956106 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.956401 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.956417 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.956428 1011955 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-775571"
	I0116 03:18:55.959261 1011955 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:18:55.893233 1011681 addons.go:505] enable addons completed in 2.203983589s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:18:57.320945 1011681 pod_ready.go:102] pod "coredns-5644d7b6d9-qmzl6" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:59.825898 1011681 pod_ready.go:102] pod "coredns-5644d7b6d9-qmzl6" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:55.960681 1011955 addons.go:505] enable addons completed in 3.311813314s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:18:55.983312 1011955 node_ready.go:49] node "default-k8s-diff-port-775571" has status "Ready":"True"
	I0116 03:18:55.983350 1011955 node_ready.go:38] duration metric: took 30.503183ms waiting for node "default-k8s-diff-port-775571" to be "Ready" ...
	I0116 03:18:55.983366 1011955 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:18:56.004432 1011955 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mk795" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.513965 1011955 pod_ready.go:92] pod "coredns-5dd5756b68-mk795" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:56.514083 1011955 pod_ready.go:81] duration metric: took 509.611409ms waiting for pod "coredns-5dd5756b68-mk795" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.514148 1011955 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.524671 1011955 pod_ready.go:92] pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:56.524770 1011955 pod_ready.go:81] duration metric: took 10.59132ms waiting for pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.524803 1011955 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.538471 1011955 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:56.538581 1011955 pod_ready.go:81] duration metric: took 13.724762ms waiting for pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.538616 1011955 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.549389 1011955 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:56.549494 1011955 pod_ready.go:81] duration metric: took 10.835015ms waiting for pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.549524 1011955 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zw495" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.757971 1011955 pod_ready.go:92] pod "kube-proxy-zw495" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:56.758009 1011955 pod_ready.go:81] duration metric: took 208.445706ms waiting for pod "kube-proxy-zw495" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.758024 1011955 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:57.156938 1011955 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:57.156972 1011955 pod_ready.go:81] duration metric: took 398.939705ms waiting for pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:57.156983 1011955 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:59.164487 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:59.818244 1011460 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.993735667s)
	I0116 03:18:59.818326 1011460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:18:59.833153 1011460 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:18:59.842806 1011460 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:18:59.851950 1011460 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:18:59.852010 1011460 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 03:19:00.070447 1011460 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:19:00.320286 1011681 pod_ready.go:92] pod "coredns-5644d7b6d9-qmzl6" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:00.320320 1011681 pod_ready.go:81] duration metric: took 5.0083337s waiting for pod "coredns-5644d7b6d9-qmzl6" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:00.320333 1011681 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tv7gz" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:00.326637 1011681 pod_ready.go:92] pod "kube-proxy-tv7gz" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:00.326664 1011681 pod_ready.go:81] duration metric: took 6.322991ms waiting for pod "kube-proxy-tv7gz" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:00.326677 1011681 pod_ready.go:38] duration metric: took 5.096991549s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:19:00.326699 1011681 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:19:00.326772 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:19:00.343804 1011681 api_server.go:72] duration metric: took 6.148440288s to wait for apiserver process to appear ...
	I0116 03:19:00.343832 1011681 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:19:00.343855 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:19:00.351105 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 200:
	ok
	I0116 03:19:00.352195 1011681 api_server.go:141] control plane version: v1.16.0
	I0116 03:19:00.352263 1011681 api_server.go:131] duration metric: took 8.420277ms to wait for apiserver health ...
	I0116 03:19:00.352283 1011681 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:19:00.361924 1011681 system_pods.go:59] 4 kube-system pods found
	I0116 03:19:00.361952 1011681 system_pods.go:61] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:00.361957 1011681 system_pods.go:61] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:00.361963 1011681 system_pods.go:61] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:00.361968 1011681 system_pods.go:61] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:00.361977 1011681 system_pods.go:74] duration metric: took 9.67913ms to wait for pod list to return data ...
	I0116 03:19:00.361987 1011681 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:19:00.364600 1011681 default_sa.go:45] found service account: "default"
	I0116 03:19:00.364630 1011681 default_sa.go:55] duration metric: took 2.635157ms for default service account to be created ...
	I0116 03:19:00.364642 1011681 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:19:00.368386 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:00.368409 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:00.368416 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:00.368423 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:00.368430 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:00.368454 1011681 retry.go:31] will retry after 285.445367ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:00.660996 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:00.661033 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:00.661040 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:00.661047 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:00.661055 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:00.661079 1011681 retry.go:31] will retry after 334.380732ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:01.000372 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:01.000401 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:01.000407 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:01.000413 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:01.000418 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:01.000437 1011681 retry.go:31] will retry after 432.029845ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:01.437761 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:01.437794 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:01.437817 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:01.437827 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:01.437835 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:01.437857 1011681 retry.go:31] will retry after 542.969865ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:01.985932 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:01.985965 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:01.985970 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:01.985977 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:01.985984 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:01.986006 1011681 retry.go:31] will retry after 682.538217ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:02.673234 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:02.673268 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:02.673274 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:02.673280 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:02.673286 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:02.673305 1011681 retry.go:31] will retry after 865.818681ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:03.544313 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:03.544355 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:03.544363 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:03.544373 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:03.544383 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:03.544407 1011681 retry.go:31] will retry after 754.732197ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:04.304165 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:04.304205 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:04.304217 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:04.304227 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:04.304235 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:04.304258 1011681 retry.go:31] will retry after 1.101452697s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:01.164856 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:03.165951 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:05.166097 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:05.411683 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:05.411726 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:05.411736 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:05.411750 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:05.411758 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:05.411781 1011681 retry.go:31] will retry after 1.524854445s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:06.941891 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:06.941929 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:06.941939 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:06.941949 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:06.941957 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:06.941984 1011681 retry.go:31] will retry after 1.460454781s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:08.408630 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:08.408662 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:08.408668 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:08.408687 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:08.408692 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:08.408713 1011681 retry.go:31] will retry after 1.769662932s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:10.184053 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:10.184081 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:10.184086 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:10.184093 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:10.184098 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:10.184117 1011681 retry.go:31] will retry after 3.059139s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:07.169102 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:09.666541 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:11.938237 1011460 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0116 03:19:11.938354 1011460 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:19:11.938572 1011460 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:19:11.939095 1011460 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:19:11.939269 1011460 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:19:11.939370 1011460 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:19:11.941237 1011460 out.go:204]   - Generating certificates and keys ...
	I0116 03:19:11.941348 1011460 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:19:11.941482 1011460 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:19:11.941579 1011460 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:19:11.941646 1011460 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:19:11.941733 1011460 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:19:11.941821 1011460 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:19:11.941908 1011460 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:19:11.941959 1011460 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:19:11.942018 1011460 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:19:11.942114 1011460 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:19:11.942208 1011460 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:19:11.942278 1011460 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:19:11.942348 1011460 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:19:11.942424 1011460 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0116 03:19:11.942487 1011460 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:19:11.942579 1011460 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:19:11.942659 1011460 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:19:11.942779 1011460 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:19:11.942856 1011460 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:19:11.944468 1011460 out.go:204]   - Booting up control plane ...
	I0116 03:19:11.944556 1011460 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:19:11.944624 1011460 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:19:11.944694 1011460 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:19:11.944847 1011460 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:19:11.944975 1011460 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:19:11.945039 1011460 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 03:19:11.945209 1011460 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:19:11.945282 1011460 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502907 seconds
	I0116 03:19:11.945373 1011460 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:19:11.945476 1011460 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:19:11.945541 1011460 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:19:11.945750 1011460 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-934668 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 03:19:11.945823 1011460 kubeadm.go:322] [bootstrap-token] Using token: pj08z0.5ut3mf4afujawh3s
	I0116 03:19:11.947396 1011460 out.go:204]   - Configuring RBAC rules ...
	I0116 03:19:11.947532 1011460 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:19:11.947645 1011460 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:19:11.947822 1011460 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:19:11.948000 1011460 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:19:11.948094 1011460 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:19:11.948182 1011460 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:19:11.948281 1011460 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:19:11.948327 1011460 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:19:11.948373 1011460 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:19:11.948383 1011460 kubeadm.go:322] 
	I0116 03:19:11.948440 1011460 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:19:11.948449 1011460 kubeadm.go:322] 
	I0116 03:19:11.948546 1011460 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:19:11.948567 1011460 kubeadm.go:322] 
	I0116 03:19:11.948614 1011460 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:19:11.948725 1011460 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:19:11.948805 1011460 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:19:11.948815 1011460 kubeadm.go:322] 
	I0116 03:19:11.948891 1011460 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 03:19:11.948901 1011460 kubeadm.go:322] 
	I0116 03:19:11.948979 1011460 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 03:19:11.949011 1011460 kubeadm.go:322] 
	I0116 03:19:11.949086 1011460 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:19:11.949215 1011460 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:19:11.949311 1011460 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:19:11.949332 1011460 kubeadm.go:322] 
	I0116 03:19:11.949463 1011460 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:19:11.949576 1011460 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:19:11.949590 1011460 kubeadm.go:322] 
	I0116 03:19:11.949688 1011460 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token pj08z0.5ut3mf4afujawh3s \
	I0116 03:19:11.949837 1011460 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b \
	I0116 03:19:11.949877 1011460 kubeadm.go:322] 	--control-plane 
	I0116 03:19:11.949890 1011460 kubeadm.go:322] 
	I0116 03:19:11.949997 1011460 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:19:11.950009 1011460 kubeadm.go:322] 
	I0116 03:19:11.950108 1011460 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token pj08z0.5ut3mf4afujawh3s \
	I0116 03:19:11.950232 1011460 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b 
	I0116 03:19:11.950269 1011460 cni.go:84] Creating CNI manager for ""
	I0116 03:19:11.950284 1011460 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:19:11.952013 1011460 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:19:11.953373 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:19:12.016915 1011460 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:19:12.042169 1011460 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:19:12.042259 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=no-preload-934668 minikube.k8s.io/updated_at=2024_01_16T03_19_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:12.042266 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:12.092434 1011460 ops.go:34] apiserver oom_adj: -16
	I0116 03:19:13.250984 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:13.251026 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:13.251035 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:13.251046 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:13.251054 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:13.251078 1011681 retry.go:31] will retry after 3.301960932s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
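	The retry.go:31 lines in this log are minikube's wait loop re-checking the kube-system pods with a growing, jittered delay until the missing control-plane components appear. A bare-bones sketch of that retry-with-backoff pattern follows; the intervals and growth factor are illustrative, not minikube's actual schedule or implementation.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling check until it returns nil or the deadline
	// passes, sleeping a little longer (with jitter) after each failed attempt.
	func retryWithBackoff(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		wait := 2 * time.Second
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			wait += wait / 2 // grow roughly 1.5x per attempt
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(30*time.Second, func() error {
			attempts++
			if attempts < 3 {
				return errors.New("missing components: etcd, kube-apiserver")
			}
			return nil
		})
		fmt.Println(err)
	}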
	I0116 03:19:12.168237 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:14.669074 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:12.372548 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:12.873171 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:13.372932 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:13.873086 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:14.373328 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:14.873249 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:15.372564 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:15.873604 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:16.372846 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:16.873652 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:16.558984 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:16.559016 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:16.559023 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:16.559031 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:16.559036 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:16.559056 1011681 retry.go:31] will retry after 4.433753761s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:17.166555 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:19.666500 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:17.373434 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:17.873591 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:18.373340 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:18.873267 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:19.373311 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:19.873538 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:20.372770 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:20.873645 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:21.373033 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:21.872773 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:22.372607 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:22.872582 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:23.372659 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:23.873410 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:24.372682 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:24.873365 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:24.989170 1011460 kubeadm.go:1088] duration metric: took 12.946988185s to wait for elevateKubeSystemPrivileges.
	I0116 03:19:24.989221 1011460 kubeadm.go:406] StartCluster complete in 5m13.370173315s
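	The half-second cadence of `kubectl get sa default` above is minikube polling until the default ServiceAccount exists before it finishes StartCluster; the 12.9s figure is how long that took here. A minimal client-go sketch of the same wait is shown below (the kubeconfig path is an assumption for illustration; this is not minikube's own helper).

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForDefaultSA polls until the "default" ServiceAccount exists in the
	// given namespace, mirroring the repeated `kubectl get sa default` calls above.
	func waitForDefaultSA(ctx context.Context, cs *kubernetes.Clientset, ns string) error {
		for {
			_, err := cs.CoreV1().ServiceAccounts(ns).Get(ctx, "default", metav1.GetOptions{})
			if err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("default service account never appeared: %w", ctx.Err())
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		// Assumed kubeconfig location, for illustration only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForDefaultSA(ctx, cs, "default"); err != nil {
			panic(err)
		}
		fmt.Println("default service account is present")
	}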
	I0116 03:19:24.989247 1011460 settings.go:142] acquiring lock: {Name:mk39c69fb5454efd5afab021c02c3bec1f1b4e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:19:24.989351 1011460 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:19:24.991793 1011460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:19:24.992117 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:19:24.992155 1011460 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:19:24.992266 1011460 addons.go:69] Setting storage-provisioner=true in profile "no-preload-934668"
	I0116 03:19:24.992274 1011460 addons.go:69] Setting default-storageclass=true in profile "no-preload-934668"
	I0116 03:19:24.992291 1011460 addons.go:234] Setting addon storage-provisioner=true in "no-preload-934668"
	I0116 03:19:24.992295 1011460 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-934668"
	I0116 03:19:24.992296 1011460 addons.go:69] Setting metrics-server=true in profile "no-preload-934668"
	I0116 03:19:24.992325 1011460 addons.go:234] Setting addon metrics-server=true in "no-preload-934668"
	W0116 03:19:24.992338 1011460 addons.go:243] addon metrics-server should already be in state true
	I0116 03:19:24.992393 1011460 config.go:182] Loaded profile config "no-preload-934668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	W0116 03:19:24.992300 1011460 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:19:24.992415 1011460 host.go:66] Checking if "no-preload-934668" exists ...
	I0116 03:19:24.992456 1011460 host.go:66] Checking if "no-preload-934668" exists ...
	I0116 03:19:24.992754 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:24.992775 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:24.992810 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:24.992831 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:24.992871 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:24.992959 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:25.010903 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33013
	I0116 03:19:25.011636 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.012150 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39939
	I0116 03:19:25.012167 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39475
	I0116 03:19:25.012223 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.012247 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.012568 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.012669 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.012784 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.013013 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.013037 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.013189 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.013202 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.013647 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:25.013677 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:25.014037 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.014040 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.014620 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetState
	I0116 03:19:25.014622 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:25.014713 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:25.018506 1011460 addons.go:234] Setting addon default-storageclass=true in "no-preload-934668"
	W0116 03:19:25.018563 1011460 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:19:25.018603 1011460 host.go:66] Checking if "no-preload-934668" exists ...
	I0116 03:19:25.019024 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:25.019089 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:25.034161 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
	I0116 03:19:25.034400 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40521
	I0116 03:19:25.034909 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.035027 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.035536 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.035555 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.035687 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.035698 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.036064 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.036123 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.036296 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetState
	I0116 03:19:25.036323 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetState
	I0116 03:19:25.037452 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0116 03:19:25.038065 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.038653 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:19:25.038797 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.038807 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.040516 1011460 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:19:25.039169 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.039494 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:19:25.041993 1011460 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:19:25.042021 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:19:25.042042 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:19:25.043350 1011460 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:19:20.998514 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:20.998541 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:20.998546 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:20.998553 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:20.998558 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:20.998576 1011681 retry.go:31] will retry after 6.19070677s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:22.164973 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:24.165241 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:25.044790 1011460 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:19:25.044804 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:19:25.044820 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:19:25.042734 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:25.044907 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:25.045505 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.046226 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:19:25.046284 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:19:25.046404 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.046434 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:19:25.046724 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:19:25.046878 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:19:25.048780 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.049237 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:19:25.049260 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.049432 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:19:25.049846 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:19:25.050200 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:19:25.050376 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:19:25.062306 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40141
	I0116 03:19:25.062765 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.063248 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.063261 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.063609 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.063805 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetState
	I0116 03:19:25.065537 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:19:25.065785 1011460 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:19:25.065818 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:19:25.065841 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:19:25.068664 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.069102 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:19:25.069125 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.069273 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:19:25.069454 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:19:25.069627 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:19:25.069763 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:19:25.182658 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:19:25.209575 1011460 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:19:25.231221 1011460 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:19:25.231310 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:19:25.287263 1011460 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:19:25.337307 1011460 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:19:25.337350 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:19:25.433778 1011460 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:19:25.433821 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:19:25.507802 1011460 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:19:25.528239 1011460 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-934668" context rescaled to 1 replicas
	I0116 03:19:25.528282 1011460 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.29 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:19:25.530067 1011460 out.go:177] * Verifying Kubernetes components...
	I0116 03:19:25.532055 1011460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:19:26.021224 1011460 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
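	The sed pipeline a few lines up rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-only gateway (192.168.50.1 in this run). A minimal sketch of the same edit done on the Corefile text in Go rather than sed is shown below; the Corefile fragment is illustrative, not the real ConfigMap contents.

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a hosts{} stanza ahead of the "forward ." line of
	// a Corefile, the same effect as the sed pipeline in the log above.
	func injectHostRecord(corefile, hostIP string) string {
		stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		return strings.Replace(corefile, "        forward .", stanza+"        forward .", 1)
	}

	func main() {
		// Illustrative Corefile fragment; the real one comes from the coredns ConfigMap.
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
		fmt.Println(injectHostRecord(corefile, "192.168.50.1"))
	}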
	I0116 03:19:26.359779 1011460 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.072464523s)
	I0116 03:19:26.359844 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.359859 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.359860 1011460 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.150243124s)
	I0116 03:19:26.359900 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.359919 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.360228 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.360258 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.360269 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.360278 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.360447 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Closing plugin on server side
	I0116 03:19:26.360507 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.360546 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.360560 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Closing plugin on server side
	I0116 03:19:26.361873 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.361895 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.361911 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.361920 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.362297 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Closing plugin on server side
	I0116 03:19:26.362339 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.362372 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.376371 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.376405 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.376703 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.376722 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.607902 1011460 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.100046486s)
	I0116 03:19:26.607968 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.607973 1011460 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.075879995s)
	I0116 03:19:26.608021 1011460 node_ready.go:35] waiting up to 6m0s for node "no-preload-934668" to be "Ready" ...
	I0116 03:19:26.607985 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.608450 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.608470 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.608483 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.608493 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.608771 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.608791 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.608794 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Closing plugin on server side
	I0116 03:19:26.608803 1011460 addons.go:470] Verifying addon metrics-server=true in "no-preload-934668"
	I0116 03:19:26.611385 1011460 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:19:26.612672 1011460 addons.go:505] enable addons completed in 1.620530835s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:19:26.611903 1011460 node_ready.go:49] node "no-preload-934668" has status "Ready":"True"
	I0116 03:19:26.612707 1011460 node_ready.go:38] duration metric: took 4.665246ms waiting for node "no-preload-934668" to be "Ready" ...
	I0116 03:19:26.612719 1011460 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:19:26.625443 1011460 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-64qzh" in "kube-system" namespace to be "Ready" ...
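	From here on, the pod_ready.go lines report whether each system-critical pod's Ready condition is True; metrics-server stays "False" for the rest of this log because its container never passes readiness. The check behind those lines boils down to inspecting the PodReady condition, sketched below as an illustrative helper (not minikube's own code).

	// Package readiness sketches the condition check behind the pod_ready.go
	// "Ready":"True"/"False" lines in this log.
	package readiness

	import corev1 "k8s.io/api/core/v1"

	// IsPodReady reports whether the pod's PodReady condition is True.
	func IsPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}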
	I0116 03:19:27.195320 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:27.195364 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:27.195375 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:27.195388 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:27.195396 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:27.195423 1011681 retry.go:31] will retry after 6.009246504s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:26.166175 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:28.167332 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:27.632495 1011460 pod_ready.go:97] error getting pod "coredns-76f75df574-64qzh" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-64qzh" not found
	I0116 03:19:27.632522 1011460 pod_ready.go:81] duration metric: took 1.007051516s waiting for pod "coredns-76f75df574-64qzh" in "kube-system" namespace to be "Ready" ...
	E0116 03:19:27.632534 1011460 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-64qzh" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-64qzh" not found
	I0116 03:19:27.632541 1011460 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-k2kc7" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.640682 1011460 pod_ready.go:92] pod "coredns-76f75df574-k2kc7" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:29.640718 1011460 pod_ready.go:81] duration metric: took 2.008169192s waiting for pod "coredns-76f75df574-k2kc7" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.640736 1011460 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.646552 1011460 pod_ready.go:92] pod "etcd-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:29.646579 1011460 pod_ready.go:81] duration metric: took 5.835401ms waiting for pod "etcd-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.646589 1011460 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.651970 1011460 pod_ready.go:92] pod "kube-apiserver-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:29.652004 1011460 pod_ready.go:81] duration metric: took 5.40828ms waiting for pod "kube-apiserver-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.652018 1011460 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.658077 1011460 pod_ready.go:92] pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:29.658104 1011460 pod_ready.go:81] duration metric: took 6.078615ms waiting for pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.658113 1011460 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fr424" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.663585 1011460 pod_ready.go:92] pod "kube-proxy-fr424" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:29.663608 1011460 pod_ready.go:81] duration metric: took 5.488053ms waiting for pod "kube-proxy-fr424" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.663617 1011460 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:30.037029 1011460 pod_ready.go:92] pod "kube-scheduler-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:30.037054 1011460 pod_ready.go:81] duration metric: took 373.431547ms waiting for pod "kube-scheduler-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:30.037066 1011460 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:32.045895 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:33.211194 1011681 system_pods.go:86] 5 kube-system pods found
	I0116 03:19:33.211224 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:33.211230 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:33.211234 1011681 system_pods.go:89] "kube-scheduler-old-k8s-version-788237" [738a1d18-b8a8-429e-b335-a542be90f4db] Pending
	I0116 03:19:33.211240 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:33.211245 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:33.211264 1011681 retry.go:31] will retry after 6.865213703s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:30.664955 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:33.164999 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:35.168217 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:34.545787 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:37.045220 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:40.083281 1011681 system_pods.go:86] 5 kube-system pods found
	I0116 03:19:40.083312 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:40.083317 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:40.083322 1011681 system_pods.go:89] "kube-scheduler-old-k8s-version-788237" [738a1d18-b8a8-429e-b335-a542be90f4db] Running
	I0116 03:19:40.083329 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:40.083333 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:40.083354 1011681 retry.go:31] will retry after 12.14535235s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0116 03:19:37.664530 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:39.666312 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:39.544826 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:41.545124 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:42.167148 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:44.666332 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:44.046884 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:46.546221 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:47.165232 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:49.165989 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:49.045230 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:51.045508 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:52.235832 1011681 system_pods.go:86] 8 kube-system pods found
	I0116 03:19:52.235865 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:52.235870 1011681 system_pods.go:89] "etcd-old-k8s-version-788237" [d4e1632d-c3ce-47c0-a692-0d108bd3c46c] Running
	I0116 03:19:52.235874 1011681 system_pods.go:89] "kube-apiserver-old-k8s-version-788237" [6d662cac-b4ba-4b5a-a942-38056d2aab63] Running
	I0116 03:19:52.235878 1011681 system_pods.go:89] "kube-controller-manager-old-k8s-version-788237" [2ccd00ed-668e-40b6-b364-63e7a85d4fe9] Pending
	I0116 03:19:52.235882 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:52.235887 1011681 system_pods.go:89] "kube-scheduler-old-k8s-version-788237" [738a1d18-b8a8-429e-b335-a542be90f4db] Running
	I0116 03:19:52.235892 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:52.235897 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:52.235916 1011681 retry.go:31] will retry after 13.113559392s: missing components: kube-controller-manager
	I0116 03:19:51.665249 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:53.667802 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:53.544777 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:55.545265 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:56.166884 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:58.167295 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:58.046171 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:00.545977 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:05.356292 1011681 system_pods.go:86] 8 kube-system pods found
	I0116 03:20:05.356332 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:20:05.356340 1011681 system_pods.go:89] "etcd-old-k8s-version-788237" [d4e1632d-c3ce-47c0-a692-0d108bd3c46c] Running
	I0116 03:20:05.356347 1011681 system_pods.go:89] "kube-apiserver-old-k8s-version-788237" [6d662cac-b4ba-4b5a-a942-38056d2aab63] Running
	I0116 03:20:05.356355 1011681 system_pods.go:89] "kube-controller-manager-old-k8s-version-788237" [2ccd00ed-668e-40b6-b364-63e7a85d4fe9] Running
	I0116 03:20:05.356361 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:20:05.356367 1011681 system_pods.go:89] "kube-scheduler-old-k8s-version-788237" [738a1d18-b8a8-429e-b335-a542be90f4db] Running
	I0116 03:20:05.356379 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:20:05.356392 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:20:05.356405 1011681 system_pods.go:126] duration metric: took 1m4.991757131s to wait for k8s-apps to be running ...
	I0116 03:20:05.356417 1011681 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:20:05.356484 1011681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:20:05.373421 1011681 system_svc.go:56] duration metric: took 16.991793ms WaitForService to wait for kubelet.
	I0116 03:20:05.373453 1011681 kubeadm.go:581] duration metric: took 1m11.178099498s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:20:05.373474 1011681 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:20:05.377261 1011681 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:20:05.377289 1011681 node_conditions.go:123] node cpu capacity is 2
	I0116 03:20:05.377303 1011681 node_conditions.go:105] duration metric: took 3.824619ms to run NodePressure ...
	I0116 03:20:05.377315 1011681 start.go:228] waiting for startup goroutines ...
	I0116 03:20:05.377324 1011681 start.go:233] waiting for cluster config update ...
	I0116 03:20:05.377340 1011681 start.go:242] writing updated cluster config ...
	I0116 03:20:05.377691 1011681 ssh_runner.go:195] Run: rm -f paused
	I0116 03:20:05.433407 1011681 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0116 03:20:05.435544 1011681 out.go:177] 
	W0116 03:20:05.437104 1011681 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0116 03:20:05.438355 1011681 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0116 03:20:05.439604 1011681 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-788237" cluster and "default" namespace by default
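	The warning above flags a 13-minor-version skew between the host kubectl (1.29.0) and the old-k8s-version cluster (1.16.0); kubectl is only supported within one minor version of the API server, hence the suggestion to use the bundled `minikube kubectl --` instead. A toy sketch of that skew check, assuming plain "major.minor.patch" strings:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference between the minor versions of
	// two "major.minor.patch" strings, e.g. ("1.29.0", "1.16.0") -> 13.
	func minorSkew(client, server string) (int, error) {
		minor := func(v string) (int, error) {
			parts := strings.Split(v, ".")
			if len(parts) < 2 {
				return 0, fmt.Errorf("unexpected version %q", v)
			}
			return strconv.Atoi(parts[1])
		}
		c, err := minor(client)
		if err != nil {
			return 0, err
		}
		s, err := minor(server)
		if err != nil {
			return 0, err
		}
		if c > s {
			return c - s, nil
		}
		return s - c, nil
	}

	func main() {
		skew, err := minorSkew("1.29.0", "1.16.0")
		if err != nil {
			panic(err)
		}
		if skew > 1 {
			fmt.Printf("! kubectl is %d minor versions away from the cluster; consider 'minikube kubectl -- get pods -A'\n", skew)
		}
	}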
	I0116 03:20:00.665894 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:03.166003 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:03.046349 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:05.047570 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:05.669899 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:08.165604 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:07.545964 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:10.045541 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:10.665401 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:12.666068 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:15.165456 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:12.545270 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:15.044498 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:17.044757 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:17.664970 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:20.170600 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:19.045718 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:21.545760 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:22.665734 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:24.666166 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:24.046926 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:26.545103 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:26.666505 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:29.166514 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:28.545929 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:31.048171 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:31.166637 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:33.665953 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:33.548606 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:35.561699 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:35.666414 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:38.165516 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:38.045658 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:40.544791 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:40.667352 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:43.165494 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:45.166150 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:42.545935 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:45.045849 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:47.667601 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:49.667904 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:47.546691 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:50.044945 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:52.046574 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:52.165607 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:54.666005 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:54.544893 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:57.048203 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:56.666062 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:58.666122 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:59.546941 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:01.547326 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:00.675116 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:03.165630 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:05.165989 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:04.045454 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:06.545774 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:07.665616 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:10.165283 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:09.045454 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:11.544234 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:12.166050 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:14.665663 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:13.546119 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:16.044940 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:16.666322 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:18.666577 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:18.545883 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:21.045761 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:21.165313 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:23.166487 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:23.543371 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:25.545045 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:25.666044 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:27.666372 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:30.166224 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:28.046020 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:30.545380 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:32.664709 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:34.665743 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:32.548394 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:35.044140 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:37.045266 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:36.666094 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:39.166598 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:39.544754 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:41.545120 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:41.665435 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:44.177500 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:44.046063 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:46.545258 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:46.665179 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:48.665479 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:49.045153 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:51.544430 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:50.665798 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:52.668246 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:55.164905 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:53.545067 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:55.548667 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:57.664986 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:00.166610 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:58.044255 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:00.046558 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:02.664972 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:04.665647 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:02.547522 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:05.045464 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:07.049814 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:07.165053 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:09.166438 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:09.545216 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:11.546990 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:11.166827 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:13.664900 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:13.547322 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:16.046930 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:15.667462 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:18.165667 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:20.167440 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:18.544902 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:20.545091 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:22.167972 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:24.665473 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:23.046783 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:25.546772 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:26.665601 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:28.667378 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:27.552093 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:30.045665 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:32.046723 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:31.166653 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:33.169992 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:34.546495 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:36.552400 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:35.667041 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:38.166719 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:39.045530 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:41.046225 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:40.664638 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:42.664974 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:45.167738 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:43.545469 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:46.045132 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:47.665457 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:50.165843 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:48.045266 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:50.544748 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:52.166892 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:54.170375 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:52.545596 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:54.546876 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:57.048120 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:56.664513 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:57.165325 1011955 pod_ready.go:81] duration metric: took 4m0.008324579s waiting for pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace to be "Ready" ...
	E0116 03:22:57.165356 1011955 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:22:57.165370 1011955 pod_ready.go:38] duration metric: took 4m1.181991459s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:22:57.165388 1011955 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:22:57.165528 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:22:57.165670 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:22:57.223487 1011955 cri.go:89] found id: "94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:22:57.223515 1011955 cri.go:89] found id: ""
	I0116 03:22:57.223523 1011955 logs.go:284] 1 containers: [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24]
	I0116 03:22:57.223579 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.228506 1011955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:22:57.228603 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:22:57.275655 1011955 cri.go:89] found id: "c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:22:57.275681 1011955 cri.go:89] found id: ""
	I0116 03:22:57.275689 1011955 logs.go:284] 1 containers: [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8]
	I0116 03:22:57.275747 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.280168 1011955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:22:57.280248 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:22:57.325379 1011955 cri.go:89] found id: "8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:22:57.325403 1011955 cri.go:89] found id: ""
	I0116 03:22:57.325412 1011955 logs.go:284] 1 containers: [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760]
	I0116 03:22:57.325485 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.330376 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:22:57.330456 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:22:57.374600 1011955 cri.go:89] found id: "19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:22:57.374633 1011955 cri.go:89] found id: ""
	I0116 03:22:57.374644 1011955 logs.go:284] 1 containers: [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e]
	I0116 03:22:57.374731 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.379908 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:22:57.379996 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:22:57.422495 1011955 cri.go:89] found id: "cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:22:57.422524 1011955 cri.go:89] found id: ""
	I0116 03:22:57.422535 1011955 logs.go:284] 1 containers: [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f]
	I0116 03:22:57.422599 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.427327 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:22:57.427398 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:22:57.472666 1011955 cri.go:89] found id: "7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:22:57.472698 1011955 cri.go:89] found id: ""
	I0116 03:22:57.472715 1011955 logs.go:284] 1 containers: [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45]
	I0116 03:22:57.472773 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.477425 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:22:57.477487 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:22:57.519963 1011955 cri.go:89] found id: ""
	I0116 03:22:57.519998 1011955 logs.go:284] 0 containers: []
	W0116 03:22:57.520008 1011955 logs.go:286] No container was found matching "kindnet"
	I0116 03:22:57.520018 1011955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:22:57.520082 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:22:57.563323 1011955 cri.go:89] found id: "f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:22:57.563351 1011955 cri.go:89] found id: ""
	I0116 03:22:57.563361 1011955 logs.go:284] 1 containers: [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da]
	I0116 03:22:57.563429 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.567849 1011955 logs.go:123] Gathering logs for kube-controller-manager [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45] ...
	I0116 03:22:57.567885 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:22:57.630746 1011955 logs.go:123] Gathering logs for storage-provisioner [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da] ...
	I0116 03:22:57.630790 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:22:57.685136 1011955 logs.go:123] Gathering logs for container status ...
	I0116 03:22:57.685175 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:22:57.744223 1011955 logs.go:123] Gathering logs for dmesg ...
	I0116 03:22:57.744253 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:22:57.758357 1011955 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:22:57.758386 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:22:57.921587 1011955 logs.go:123] Gathering logs for kube-apiserver [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24] ...
	I0116 03:22:57.921631 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:22:57.981922 1011955 logs.go:123] Gathering logs for etcd [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8] ...
	I0116 03:22:57.981959 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:22:58.036701 1011955 logs.go:123] Gathering logs for kube-proxy [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f] ...
	I0116 03:22:58.036735 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:22:58.078332 1011955 logs.go:123] Gathering logs for kubelet ...
	I0116 03:22:58.078366 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:22:58.163271 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:22:58.163463 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:22:58.186700 1011955 logs.go:123] Gathering logs for coredns [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760] ...
	I0116 03:22:58.186740 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:22:58.230943 1011955 logs.go:123] Gathering logs for kube-scheduler [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e] ...
	I0116 03:22:58.230987 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:22:58.284787 1011955 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:22:58.284826 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:22:58.711979 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:22:58.712020 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:22:58.712201 1011955 out.go:239] X Problems detected in kubelet:
	W0116 03:22:58.712218 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:22:58.712232 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:22:58.712247 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:22:58.712259 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:22:59.550035 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:02.045996 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:04.049349 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:06.545441 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:08.713432 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:23:08.730913 1011955 api_server.go:72] duration metric: took 4m15.560433909s to wait for apiserver process to appear ...
	I0116 03:23:08.730953 1011955 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:23:08.731009 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:23:08.731083 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:23:08.781386 1011955 cri.go:89] found id: "94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:23:08.781415 1011955 cri.go:89] found id: ""
	I0116 03:23:08.781425 1011955 logs.go:284] 1 containers: [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24]
	I0116 03:23:08.781487 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:08.787261 1011955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:23:08.787341 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:23:08.840893 1011955 cri.go:89] found id: "c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:23:08.840929 1011955 cri.go:89] found id: ""
	I0116 03:23:08.840940 1011955 logs.go:284] 1 containers: [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8]
	I0116 03:23:08.840996 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:08.846278 1011955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:23:08.846350 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:23:08.894119 1011955 cri.go:89] found id: "8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:23:08.894141 1011955 cri.go:89] found id: ""
	I0116 03:23:08.894149 1011955 logs.go:284] 1 containers: [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760]
	I0116 03:23:08.894204 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:08.899019 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:23:08.899088 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:23:08.944579 1011955 cri.go:89] found id: "19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:23:08.944607 1011955 cri.go:89] found id: ""
	I0116 03:23:08.944616 1011955 logs.go:284] 1 containers: [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e]
	I0116 03:23:08.944689 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:08.948828 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:23:08.948907 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:23:08.997870 1011955 cri.go:89] found id: "cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:23:08.997904 1011955 cri.go:89] found id: ""
	I0116 03:23:08.997916 1011955 logs.go:284] 1 containers: [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f]
	I0116 03:23:08.997987 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:09.002335 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:23:09.002420 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:23:09.042381 1011955 cri.go:89] found id: "7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:23:09.042408 1011955 cri.go:89] found id: ""
	I0116 03:23:09.042417 1011955 logs.go:284] 1 containers: [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45]
	I0116 03:23:09.042481 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:09.047097 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:23:09.047180 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:23:09.093592 1011955 cri.go:89] found id: ""
	I0116 03:23:09.093628 1011955 logs.go:284] 0 containers: []
	W0116 03:23:09.093639 1011955 logs.go:286] No container was found matching "kindnet"
	I0116 03:23:09.093648 1011955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:23:09.093730 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:23:09.142839 1011955 cri.go:89] found id: "f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:23:09.142868 1011955 cri.go:89] found id: ""
	I0116 03:23:09.142878 1011955 logs.go:284] 1 containers: [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da]
	I0116 03:23:09.142950 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:09.146997 1011955 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:23:09.147032 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:23:09.550608 1011955 logs.go:123] Gathering logs for kubelet ...
	I0116 03:23:09.550654 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:23:09.637527 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:23:09.637714 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:23:09.660631 1011955 logs.go:123] Gathering logs for etcd [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8] ...
	I0116 03:23:09.660676 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:23:09.715818 1011955 logs.go:123] Gathering logs for kube-scheduler [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e] ...
	I0116 03:23:09.715860 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:23:09.770445 1011955 logs.go:123] Gathering logs for coredns [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760] ...
	I0116 03:23:09.770487 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:23:09.817598 1011955 logs.go:123] Gathering logs for kube-proxy [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f] ...
	I0116 03:23:09.817640 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:23:09.866233 1011955 logs.go:123] Gathering logs for kube-controller-manager [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45] ...
	I0116 03:23:09.866276 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:23:09.929526 1011955 logs.go:123] Gathering logs for storage-provisioner [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da] ...
	I0116 03:23:09.929564 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:23:09.971573 1011955 logs.go:123] Gathering logs for container status ...
	I0116 03:23:09.971603 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:23:10.023976 1011955 logs.go:123] Gathering logs for dmesg ...
	I0116 03:23:10.024008 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:23:10.042100 1011955 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:23:10.042140 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:23:10.197828 1011955 logs.go:123] Gathering logs for kube-apiserver [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24] ...
	I0116 03:23:10.197867 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:23:10.248743 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:10.248783 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:23:10.248869 1011955 out.go:239] X Problems detected in kubelet:
	W0116 03:23:10.248882 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:23:10.248900 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:23:10.248913 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:10.248919 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:23:08.545744 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:11.045197 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:13.047444 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:15.544949 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:20.249250 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:23:20.255958 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0116 03:23:20.257425 1011955 api_server.go:141] control plane version: v1.28.4
	I0116 03:23:20.257457 1011955 api_server.go:131] duration metric: took 11.526494801s to wait for apiserver health ...
	I0116 03:23:20.257467 1011955 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:23:20.257504 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:23:20.257572 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:23:20.304303 1011955 cri.go:89] found id: "94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:23:20.304331 1011955 cri.go:89] found id: ""
	I0116 03:23:20.304342 1011955 logs.go:284] 1 containers: [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24]
	I0116 03:23:20.304410 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.309509 1011955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:23:20.309599 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:23:20.353692 1011955 cri.go:89] found id: "c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:23:20.353721 1011955 cri.go:89] found id: ""
	I0116 03:23:20.353731 1011955 logs.go:284] 1 containers: [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8]
	I0116 03:23:20.353816 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.358894 1011955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:23:20.358978 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:23:20.409337 1011955 cri.go:89] found id: "8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:23:20.409364 1011955 cri.go:89] found id: ""
	I0116 03:23:20.409388 1011955 logs.go:284] 1 containers: [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760]
	I0116 03:23:20.409462 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.414337 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:23:20.414422 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:23:20.458585 1011955 cri.go:89] found id: "19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:23:20.458613 1011955 cri.go:89] found id: ""
	I0116 03:23:20.458621 1011955 logs.go:284] 1 containers: [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e]
	I0116 03:23:20.458688 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.463813 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:23:20.463899 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:23:20.514696 1011955 cri.go:89] found id: "cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:23:20.514729 1011955 cri.go:89] found id: ""
	I0116 03:23:20.514740 1011955 logs.go:284] 1 containers: [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f]
	I0116 03:23:20.514813 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.520195 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:23:20.520289 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:23:17.546020 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:19.546663 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:22.046331 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:20.563280 1011955 cri.go:89] found id: "7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:23:20.563313 1011955 cri.go:89] found id: ""
	I0116 03:23:20.563325 1011955 logs.go:284] 1 containers: [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45]
	I0116 03:23:20.563392 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.572063 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:23:20.572143 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:23:20.610050 1011955 cri.go:89] found id: ""
	I0116 03:23:20.610078 1011955 logs.go:284] 0 containers: []
	W0116 03:23:20.610087 1011955 logs.go:286] No container was found matching "kindnet"
	I0116 03:23:20.610093 1011955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:23:20.610149 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:23:20.651475 1011955 cri.go:89] found id: "f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:23:20.651499 1011955 cri.go:89] found id: ""
	I0116 03:23:20.651509 1011955 logs.go:284] 1 containers: [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da]
	I0116 03:23:20.651575 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.656379 1011955 logs.go:123] Gathering logs for etcd [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8] ...
	I0116 03:23:20.656405 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:23:20.706726 1011955 logs.go:123] Gathering logs for kube-scheduler [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e] ...
	I0116 03:23:20.706762 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:23:20.755434 1011955 logs.go:123] Gathering logs for storage-provisioner [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da] ...
	I0116 03:23:20.755472 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:23:20.796611 1011955 logs.go:123] Gathering logs for kubelet ...
	I0116 03:23:20.796649 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:23:20.888886 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:23:20.889106 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:23:20.915624 1011955 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:23:20.915668 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:23:21.069499 1011955 logs.go:123] Gathering logs for kube-apiserver [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24] ...
	I0116 03:23:21.069544 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:23:21.128642 1011955 logs.go:123] Gathering logs for kube-controller-manager [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45] ...
	I0116 03:23:21.128686 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:23:21.186151 1011955 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:23:21.186204 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:23:21.586722 1011955 logs.go:123] Gathering logs for container status ...
	I0116 03:23:21.586769 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:23:21.642253 1011955 logs.go:123] Gathering logs for dmesg ...
	I0116 03:23:21.642301 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:23:21.658076 1011955 logs.go:123] Gathering logs for coredns [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760] ...
	I0116 03:23:21.658108 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:23:21.712191 1011955 logs.go:123] Gathering logs for kube-proxy [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f] ...
	I0116 03:23:21.712229 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:23:21.763632 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:21.763672 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:23:21.763767 1011955 out.go:239] X Problems detected in kubelet:
	W0116 03:23:21.763792 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:23:21.763804 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:23:21.763816 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:21.763826 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:23:24.046962 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:26.544587 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:31.774617 1011955 system_pods.go:59] 8 kube-system pods found
	I0116 03:23:31.774653 1011955 system_pods.go:61] "coredns-5dd5756b68-mk795" [b928a6ae-07af-4bc4-a0c5-b3027730459c] Running
	I0116 03:23:31.774660 1011955 system_pods.go:61] "etcd-default-k8s-diff-port-775571" [1ec6d1b7-1c79-436f-bc2c-7f25d7b35d40] Running
	I0116 03:23:31.774664 1011955 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-775571" [0085c55b-c122-41dc-ab1b-e1110606563d] Running
	I0116 03:23:31.774670 1011955 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-775571" [57f644e6-74c4-4de5-a725-5dc2e049a78a] Running
	I0116 03:23:31.774677 1011955 system_pods.go:61] "kube-proxy-zw495" [d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09] Running
	I0116 03:23:31.774683 1011955 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-775571" [8b024142-545b-46c1-babc-f0a544d2debc] Running
	I0116 03:23:31.774694 1011955 system_pods.go:61] "metrics-server-57f55c9bc5-928d7" [d3671063-27a1-4ad8-9f5f-b3e01205f483] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:23:31.774709 1011955 system_pods.go:61] "storage-provisioner" [8c309131-3f2c-411d-9876-05424a2c3b79] Running
	I0116 03:23:31.774720 1011955 system_pods.go:74] duration metric: took 11.517244217s to wait for pod list to return data ...
	I0116 03:23:31.774733 1011955 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:23:31.777691 1011955 default_sa.go:45] found service account: "default"
	I0116 03:23:31.777717 1011955 default_sa.go:55] duration metric: took 2.971824ms for default service account to be created ...
	I0116 03:23:31.777725 1011955 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:23:31.784992 1011955 system_pods.go:86] 8 kube-system pods found
	I0116 03:23:31.785020 1011955 system_pods.go:89] "coredns-5dd5756b68-mk795" [b928a6ae-07af-4bc4-a0c5-b3027730459c] Running
	I0116 03:23:31.785027 1011955 system_pods.go:89] "etcd-default-k8s-diff-port-775571" [1ec6d1b7-1c79-436f-bc2c-7f25d7b35d40] Running
	I0116 03:23:31.785032 1011955 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-775571" [0085c55b-c122-41dc-ab1b-e1110606563d] Running
	I0116 03:23:31.785036 1011955 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-775571" [57f644e6-74c4-4de5-a725-5dc2e049a78a] Running
	I0116 03:23:31.785041 1011955 system_pods.go:89] "kube-proxy-zw495" [d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09] Running
	I0116 03:23:31.785045 1011955 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-775571" [8b024142-545b-46c1-babc-f0a544d2debc] Running
	I0116 03:23:31.785053 1011955 system_pods.go:89] "metrics-server-57f55c9bc5-928d7" [d3671063-27a1-4ad8-9f5f-b3e01205f483] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:23:31.785058 1011955 system_pods.go:89] "storage-provisioner" [8c309131-3f2c-411d-9876-05424a2c3b79] Running
	I0116 03:23:31.785066 1011955 system_pods.go:126] duration metric: took 7.335258ms to wait for k8s-apps to be running ...
	I0116 03:23:31.785075 1011955 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:23:31.785125 1011955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:23:31.801767 1011955 system_svc.go:56] duration metric: took 16.666559ms WaitForService to wait for kubelet.
	I0116 03:23:31.801797 1011955 kubeadm.go:581] duration metric: took 4m38.631327454s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:23:31.801841 1011955 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:23:31.805655 1011955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:23:31.805721 1011955 node_conditions.go:123] node cpu capacity is 2
	I0116 03:23:31.805773 1011955 node_conditions.go:105] duration metric: took 3.924567ms to run NodePressure ...
	I0116 03:23:31.805791 1011955 start.go:228] waiting for startup goroutines ...
	I0116 03:23:31.805822 1011955 start.go:233] waiting for cluster config update ...
	I0116 03:23:31.805842 1011955 start.go:242] writing updated cluster config ...
	I0116 03:23:31.806160 1011955 ssh_runner.go:195] Run: rm -f paused
	I0116 03:23:31.863603 1011955 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:23:31.865992 1011955 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-775571" cluster and "default" namespace by default
	I0116 03:23:28.545733 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:30.051002 1011460 pod_ready.go:81] duration metric: took 4m0.013925231s waiting for pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace to be "Ready" ...
	E0116 03:23:30.051029 1011460 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:23:30.051040 1011460 pod_ready.go:38] duration metric: took 4m3.438310266s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:23:30.051073 1011460 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:23:30.051111 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:23:30.051173 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:23:30.118195 1011460 cri.go:89] found id: "f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:30.118230 1011460 cri.go:89] found id: ""
	I0116 03:23:30.118241 1011460 logs.go:284] 1 containers: [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e]
	I0116 03:23:30.118325 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.124760 1011460 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:23:30.124844 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:23:30.193482 1011460 cri.go:89] found id: "2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:30.193512 1011460 cri.go:89] found id: ""
	I0116 03:23:30.193522 1011460 logs.go:284] 1 containers: [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f]
	I0116 03:23:30.193586 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.201066 1011460 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:23:30.201155 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:23:30.265943 1011460 cri.go:89] found id: "229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:30.265979 1011460 cri.go:89] found id: ""
	I0116 03:23:30.265991 1011460 logs.go:284] 1 containers: [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058]
	I0116 03:23:30.266071 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.271404 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:23:30.271498 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:23:30.315307 1011460 cri.go:89] found id: "63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:30.315336 1011460 cri.go:89] found id: ""
	I0116 03:23:30.315346 1011460 logs.go:284] 1 containers: [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021]
	I0116 03:23:30.315422 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.321045 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:23:30.321118 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:23:30.370734 1011460 cri.go:89] found id: "153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:30.370760 1011460 cri.go:89] found id: ""
	I0116 03:23:30.370770 1011460 logs.go:284] 1 containers: [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8]
	I0116 03:23:30.370821 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.375705 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:23:30.375785 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:23:30.415457 1011460 cri.go:89] found id: "997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:30.415487 1011460 cri.go:89] found id: ""
	I0116 03:23:30.415498 1011460 logs.go:284] 1 containers: [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d]
	I0116 03:23:30.415569 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.420117 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:23:30.420209 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:23:30.461056 1011460 cri.go:89] found id: ""
	I0116 03:23:30.461093 1011460 logs.go:284] 0 containers: []
	W0116 03:23:30.461105 1011460 logs.go:286] No container was found matching "kindnet"
	I0116 03:23:30.461114 1011460 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:23:30.461186 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:23:30.504581 1011460 cri.go:89] found id: "4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:30.504616 1011460 cri.go:89] found id: ""
	I0116 03:23:30.504627 1011460 logs.go:284] 1 containers: [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19]
	I0116 03:23:30.504698 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.509619 1011460 logs.go:123] Gathering logs for coredns [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058] ...
	I0116 03:23:30.509670 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:30.553986 1011460 logs.go:123] Gathering logs for kube-controller-manager [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d] ...
	I0116 03:23:30.554027 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:30.613360 1011460 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:23:30.613415 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:23:31.049281 1011460 logs.go:123] Gathering logs for dmesg ...
	I0116 03:23:31.049331 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:23:31.067692 1011460 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:23:31.067732 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:23:31.225415 1011460 logs.go:123] Gathering logs for kube-apiserver [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e] ...
	I0116 03:23:31.225457 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:31.288824 1011460 logs.go:123] Gathering logs for etcd [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f] ...
	I0116 03:23:31.288865 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:31.349273 1011460 logs.go:123] Gathering logs for container status ...
	I0116 03:23:31.349318 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:23:31.398655 1011460 logs.go:123] Gathering logs for kubelet ...
	I0116 03:23:31.398696 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:23:31.469496 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.469683 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.469882 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.470041 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:31.493488 1011460 logs.go:123] Gathering logs for kube-scheduler [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021] ...
	I0116 03:23:31.493533 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:31.551159 1011460 logs.go:123] Gathering logs for kube-proxy [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8] ...
	I0116 03:23:31.551200 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:31.590293 1011460 logs.go:123] Gathering logs for storage-provisioner [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19] ...
	I0116 03:23:31.590434 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:31.634337 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:31.634367 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:23:31.634430 1011460 out.go:239] X Problems detected in kubelet:
	W0116 03:23:31.634447 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.634457 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.634471 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.634476 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:31.634485 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:31.634490 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:23:41.635544 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:23:41.654207 1011460 api_server.go:72] duration metric: took 4m16.125890122s to wait for apiserver process to appear ...
	I0116 03:23:41.654244 1011460 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:23:41.654312 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:23:41.654391 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:23:41.704947 1011460 cri.go:89] found id: "f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:41.704976 1011460 cri.go:89] found id: ""
	I0116 03:23:41.704984 1011460 logs.go:284] 1 containers: [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e]
	I0116 03:23:41.705042 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.710602 1011460 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:23:41.710687 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:23:41.754322 1011460 cri.go:89] found id: "2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:41.754356 1011460 cri.go:89] found id: ""
	I0116 03:23:41.754368 1011460 logs.go:284] 1 containers: [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f]
	I0116 03:23:41.754437 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.760172 1011460 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:23:41.760283 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:23:41.810626 1011460 cri.go:89] found id: "229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:41.810664 1011460 cri.go:89] found id: ""
	I0116 03:23:41.810674 1011460 logs.go:284] 1 containers: [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058]
	I0116 03:23:41.810749 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.815588 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:23:41.815687 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:23:41.859547 1011460 cri.go:89] found id: "63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:41.859573 1011460 cri.go:89] found id: ""
	I0116 03:23:41.859580 1011460 logs.go:284] 1 containers: [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021]
	I0116 03:23:41.859637 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.864333 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:23:41.864416 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:23:41.914604 1011460 cri.go:89] found id: "153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:41.914638 1011460 cri.go:89] found id: ""
	I0116 03:23:41.914648 1011460 logs.go:284] 1 containers: [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8]
	I0116 03:23:41.914718 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.919459 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:23:41.919538 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:23:41.965709 1011460 cri.go:89] found id: "997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:41.965751 1011460 cri.go:89] found id: ""
	I0116 03:23:41.965763 1011460 logs.go:284] 1 containers: [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d]
	I0116 03:23:41.965857 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.970346 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:23:41.970445 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:23:42.017222 1011460 cri.go:89] found id: ""
	I0116 03:23:42.017253 1011460 logs.go:284] 0 containers: []
	W0116 03:23:42.017265 1011460 logs.go:286] No container was found matching "kindnet"
	I0116 03:23:42.017275 1011460 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:23:42.017341 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:23:42.065935 1011460 cri.go:89] found id: "4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:42.065967 1011460 cri.go:89] found id: ""
	I0116 03:23:42.065977 1011460 logs.go:284] 1 containers: [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19]
	I0116 03:23:42.066041 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:42.070695 1011460 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:23:42.070722 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:23:42.440423 1011460 logs.go:123] Gathering logs for kubelet ...
	I0116 03:23:42.440483 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:23:42.514598 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:42.514770 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:42.514914 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:42.515087 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:42.539532 1011460 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:23:42.539575 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:23:42.708733 1011460 logs.go:123] Gathering logs for coredns [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058] ...
	I0116 03:23:42.708775 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:42.792841 1011460 logs.go:123] Gathering logs for kube-scheduler [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021] ...
	I0116 03:23:42.792886 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:42.860086 1011460 logs.go:123] Gathering logs for kube-proxy [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8] ...
	I0116 03:23:42.860130 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:42.906116 1011460 logs.go:123] Gathering logs for kube-controller-manager [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d] ...
	I0116 03:23:42.906156 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:42.962172 1011460 logs.go:123] Gathering logs for storage-provisioner [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19] ...
	I0116 03:23:42.962220 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:43.001097 1011460 logs.go:123] Gathering logs for dmesg ...
	I0116 03:23:43.001133 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:23:43.017487 1011460 logs.go:123] Gathering logs for kube-apiserver [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e] ...
	I0116 03:23:43.017533 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:43.077368 1011460 logs.go:123] Gathering logs for etcd [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f] ...
	I0116 03:23:43.077408 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:43.125553 1011460 logs.go:123] Gathering logs for container status ...
	I0116 03:23:43.125587 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:23:43.175165 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:43.175195 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:23:43.175256 1011460 out.go:239] X Problems detected in kubelet:
	W0116 03:23:43.175268 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:43.175279 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:43.175292 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:43.175300 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:43.175308 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:43.175316 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:23:53.176994 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:23:53.183515 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 200:
	ok
	I0116 03:23:53.185020 1011460 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 03:23:53.185050 1011460 api_server.go:131] duration metric: took 11.530797787s to wait for apiserver health ...
	I0116 03:23:53.185061 1011460 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:23:53.185092 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:23:53.185148 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:23:53.234245 1011460 cri.go:89] found id: "f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:53.234274 1011460 cri.go:89] found id: ""
	I0116 03:23:53.234284 1011460 logs.go:284] 1 containers: [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e]
	I0116 03:23:53.234356 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.239078 1011460 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:23:53.239169 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:23:53.286989 1011460 cri.go:89] found id: "2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:53.287021 1011460 cri.go:89] found id: ""
	I0116 03:23:53.287031 1011460 logs.go:284] 1 containers: [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f]
	I0116 03:23:53.287106 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.291809 1011460 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:23:53.291898 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:23:53.342514 1011460 cri.go:89] found id: "229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:53.342549 1011460 cri.go:89] found id: ""
	I0116 03:23:53.342560 1011460 logs.go:284] 1 containers: [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058]
	I0116 03:23:53.342644 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.347443 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:23:53.347536 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:23:53.407101 1011460 cri.go:89] found id: "63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:53.407129 1011460 cri.go:89] found id: ""
	I0116 03:23:53.407139 1011460 logs.go:284] 1 containers: [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021]
	I0116 03:23:53.407204 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.411444 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:23:53.411526 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:23:53.451514 1011460 cri.go:89] found id: "153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:53.451538 1011460 cri.go:89] found id: ""
	I0116 03:23:53.451545 1011460 logs.go:284] 1 containers: [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8]
	I0116 03:23:53.451613 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.455819 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:23:53.455907 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:23:53.498341 1011460 cri.go:89] found id: "997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:53.498372 1011460 cri.go:89] found id: ""
	I0116 03:23:53.498385 1011460 logs.go:284] 1 containers: [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d]
	I0116 03:23:53.498456 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.503007 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:23:53.503075 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:23:53.549549 1011460 cri.go:89] found id: ""
	I0116 03:23:53.549585 1011460 logs.go:284] 0 containers: []
	W0116 03:23:53.549597 1011460 logs.go:286] No container was found matching "kindnet"
	I0116 03:23:53.549606 1011460 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:23:53.549676 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:23:53.590624 1011460 cri.go:89] found id: "4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:53.590655 1011460 cri.go:89] found id: ""
	I0116 03:23:53.590672 1011460 logs.go:284] 1 containers: [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19]
	I0116 03:23:53.590743 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.594912 1011460 logs.go:123] Gathering logs for etcd [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f] ...
	I0116 03:23:53.594950 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:53.644842 1011460 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:23:53.644885 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:23:54.036154 1011460 logs.go:123] Gathering logs for container status ...
	I0116 03:23:54.036221 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:23:54.096374 1011460 logs.go:123] Gathering logs for kubelet ...
	I0116 03:23:54.096416 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:23:54.170840 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.171084 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.171231 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.171388 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:54.197037 1011460 logs.go:123] Gathering logs for kube-apiserver [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e] ...
	I0116 03:23:54.197086 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:54.254502 1011460 logs.go:123] Gathering logs for coredns [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058] ...
	I0116 03:23:54.254558 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:54.296951 1011460 logs.go:123] Gathering logs for kube-scheduler [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021] ...
	I0116 03:23:54.296999 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:54.353946 1011460 logs.go:123] Gathering logs for kube-proxy [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8] ...
	I0116 03:23:54.354001 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:54.399575 1011460 logs.go:123] Gathering logs for kube-controller-manager [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d] ...
	I0116 03:23:54.399609 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:54.463603 1011460 logs.go:123] Gathering logs for storage-provisioner [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19] ...
	I0116 03:23:54.463643 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:54.508557 1011460 logs.go:123] Gathering logs for dmesg ...
	I0116 03:23:54.508594 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:23:54.522542 1011460 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:23:54.522574 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:23:54.653996 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:54.654029 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:23:54.654095 1011460 out.go:239] X Problems detected in kubelet:
	W0116 03:23:54.654115 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.654128 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.654140 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.654148 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:54.654158 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:54.654167 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:24:04.664925 1011460 system_pods.go:59] 8 kube-system pods found
	I0116 03:24:04.664971 1011460 system_pods.go:61] "coredns-76f75df574-k2kc7" [d05aee05-aff7-4500-b656-8f66a3f622d2] Running
	I0116 03:24:04.664978 1011460 system_pods.go:61] "etcd-no-preload-934668" [b927b4df-f865-400c-9277-32778f7c5e30] Running
	I0116 03:24:04.664986 1011460 system_pods.go:61] "kube-apiserver-no-preload-934668" [648abde5-ec7c-4fd4-81e5-734ac6e631fc] Running
	I0116 03:24:04.664994 1011460 system_pods.go:61] "kube-controller-manager-no-preload-934668" [8a568dfa-e657-47e8-b369-c02a31271e58] Running
	I0116 03:24:04.664998 1011460 system_pods.go:61] "kube-proxy-fr424" [f24ae333-7f56-47bf-b66f-3192010a2cc4] Running
	I0116 03:24:04.665003 1011460 system_pods.go:61] "kube-scheduler-no-preload-934668" [fc295053-1d78-4f15-91f8-41330bf47c1a] Running
	I0116 03:24:04.665013 1011460 system_pods.go:61] "metrics-server-57f55c9bc5-6w2t7" [5169514b-c507-4e5e-b607-6806f6e32801] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:24:04.665019 1011460 system_pods.go:61] "storage-provisioner" [eb4f416a-8bdc-4a7c-bea1-14015339520b] Running
	I0116 03:24:04.665027 1011460 system_pods.go:74] duration metric: took 11.479959039s to wait for pod list to return data ...
	I0116 03:24:04.665042 1011460 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:24:04.668183 1011460 default_sa.go:45] found service account: "default"
	I0116 03:24:04.668217 1011460 default_sa.go:55] duration metric: took 3.167177ms for default service account to be created ...
	I0116 03:24:04.668228 1011460 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:24:04.674701 1011460 system_pods.go:86] 8 kube-system pods found
	I0116 03:24:04.674736 1011460 system_pods.go:89] "coredns-76f75df574-k2kc7" [d05aee05-aff7-4500-b656-8f66a3f622d2] Running
	I0116 03:24:04.674742 1011460 system_pods.go:89] "etcd-no-preload-934668" [b927b4df-f865-400c-9277-32778f7c5e30] Running
	I0116 03:24:04.674747 1011460 system_pods.go:89] "kube-apiserver-no-preload-934668" [648abde5-ec7c-4fd4-81e5-734ac6e631fc] Running
	I0116 03:24:04.674752 1011460 system_pods.go:89] "kube-controller-manager-no-preload-934668" [8a568dfa-e657-47e8-b369-c02a31271e58] Running
	I0116 03:24:04.674756 1011460 system_pods.go:89] "kube-proxy-fr424" [f24ae333-7f56-47bf-b66f-3192010a2cc4] Running
	I0116 03:24:04.674760 1011460 system_pods.go:89] "kube-scheduler-no-preload-934668" [fc295053-1d78-4f15-91f8-41330bf47c1a] Running
	I0116 03:24:04.674766 1011460 system_pods.go:89] "metrics-server-57f55c9bc5-6w2t7" [5169514b-c507-4e5e-b607-6806f6e32801] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:24:04.674771 1011460 system_pods.go:89] "storage-provisioner" [eb4f416a-8bdc-4a7c-bea1-14015339520b] Running
	I0116 03:24:04.674780 1011460 system_pods.go:126] duration metric: took 6.545541ms to wait for k8s-apps to be running ...
	I0116 03:24:04.674794 1011460 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:24:04.674845 1011460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:24:04.692060 1011460 system_svc.go:56] duration metric: took 17.248436ms WaitForService to wait for kubelet.
	I0116 03:24:04.692099 1011460 kubeadm.go:581] duration metric: took 4m39.163790794s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:24:04.692129 1011460 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:24:04.696664 1011460 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:24:04.696709 1011460 node_conditions.go:123] node cpu capacity is 2
	I0116 03:24:04.696728 1011460 node_conditions.go:105] duration metric: took 4.592869ms to run NodePressure ...
	I0116 03:24:04.696745 1011460 start.go:228] waiting for startup goroutines ...
	I0116 03:24:04.696755 1011460 start.go:233] waiting for cluster config update ...
	I0116 03:24:04.696770 1011460 start.go:242] writing updated cluster config ...
	I0116 03:24:04.697135 1011460 ssh_runner.go:195] Run: rm -f paused
	I0116 03:24:04.750649 1011460 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0116 03:24:04.752669 1011460 out.go:177] * Done! kubectl is now configured to use "no-preload-934668" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 03:13:01 UTC, ends at Tue 2024-01-16 03:29:07 UTC. --
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.326648579Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375747326623468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=86c9d145-d4b8-4dfa-9343-bf015d025581 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.327383655Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c3adee41-c256-4a5a-8050-d4edd13bc90c name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.327434545Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c3adee41-c256-4a5a-8050-d4edd13bc90c name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.328257112Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39f3d7fe5482f24d0405541c570378b116cdad4537f37ecd47a184e266678440,PodSandboxId:a7674dd11cbc95573f5a04933204bad08362a263485fa925f821ab54ea8e422a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375136967703370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5fac91-1606-4716-a04a-18e9d80c926b,},Annotations:map[string]string{io.kubernetes.container.hash: 3311cea0,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccddac0572d056ac2605ff0ddb55170ab09f4e0657bcb57b1a8faab6d748f1ae,PodSandboxId:b9cd1654d0aa8be03fd80ca5af350b324f6196fea89934eb5d6d5dd938c924ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705375136457781786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tv7gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1bf5e59-b2ae-489c-8297-25ad7c456303,},Annotations:map[string]string{io.kubernetes.container.hash: 52d989bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd98624191993b478451414500a1a65222fd5fa43a8947d332d980468fc0a67a,PodSandboxId:be8a8f92ce56eb65b9b241795b17f6a11b3acf0d096a5a926884629c8906873b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705375135624333509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qmzl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4b23ca-c18b-4158-b15b-df53326b384c,},Annotations:map[string]string{io.kubernetes.container.hash: 4a86af59,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e327d721f3f2f3b4ab0bbb5f1278403fea05f4be2b069c149c00c4a8c8a45fff,PodSandboxId:8db08f325ea8f04c81080318269bee2998c25d84a01408fb42c9a4101efad5b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705375109035882891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c003f74fd30a9694a059c0d4138e96d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1a366b22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c8ff8ca133a1b2c457758f11318941da96fd0179f38c83dff5592244df7855b,PodSandboxId:7788241941dcfa5597a07cb17ab04e7385db133cde9f7d3fcb79b38c69f99b44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705375107770737585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0478c9a69e8126feecf553fe27b1082236f91220b1da2fa67d23a45755014ee9,PodSandboxId:d6c1775cb397c96897a8de69200128e362f2c5848fb82bc56c1bda139fd469de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705375107533635031,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f47fadd92bab56c45d6d368728025ede6d264522c3f0cfd263543f5d68ae0e2,PodSandboxId:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705375106921765751,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 96857140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c79c8713cf40509eb13eda6bc933e764ae4bb84455c0a1ed89ca21d32bc667cc,PodSandboxId:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1705374811839259892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 96857140,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c3adee41-c256-4a5a-8050-d4edd13bc90c name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.367370176Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0773c673-9125-420e-85e1-3badb163e23f name=/runtime.v1.RuntimeService/Version
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.367550908Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0773c673-9125-420e-85e1-3badb163e23f name=/runtime.v1.RuntimeService/Version
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.369172999Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=41335303-91f3-4599-aabc-907abced51b0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.369661836Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375747369646197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=41335303-91f3-4599-aabc-907abced51b0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.370416882Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cc0f713c-04d3-4784-b26c-6dafafd040e7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.370570692Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cc0f713c-04d3-4784-b26c-6dafafd040e7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.370764724Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39f3d7fe5482f24d0405541c570378b116cdad4537f37ecd47a184e266678440,PodSandboxId:a7674dd11cbc95573f5a04933204bad08362a263485fa925f821ab54ea8e422a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375136967703370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5fac91-1606-4716-a04a-18e9d80c926b,},Annotations:map[string]string{io.kubernetes.container.hash: 3311cea0,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccddac0572d056ac2605ff0ddb55170ab09f4e0657bcb57b1a8faab6d748f1ae,PodSandboxId:b9cd1654d0aa8be03fd80ca5af350b324f6196fea89934eb5d6d5dd938c924ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705375136457781786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tv7gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1bf5e59-b2ae-489c-8297-25ad7c456303,},Annotations:map[string]string{io.kubernetes.container.hash: 52d989bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd98624191993b478451414500a1a65222fd5fa43a8947d332d980468fc0a67a,PodSandboxId:be8a8f92ce56eb65b9b241795b17f6a11b3acf0d096a5a926884629c8906873b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705375135624333509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qmzl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4b23ca-c18b-4158-b15b-df53326b384c,},Annotations:map[string]string{io.kubernetes.container.hash: 4a86af59,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e327d721f3f2f3b4ab0bbb5f1278403fea05f4be2b069c149c00c4a8c8a45fff,PodSandboxId:8db08f325ea8f04c81080318269bee2998c25d84a01408fb42c9a4101efad5b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705375109035882891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c003f74fd30a9694a059c0d4138e96d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1a366b22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c8ff8ca133a1b2c457758f11318941da96fd0179f38c83dff5592244df7855b,PodSandboxId:7788241941dcfa5597a07cb17ab04e7385db133cde9f7d3fcb79b38c69f99b44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705375107770737585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0478c9a69e8126feecf553fe27b1082236f91220b1da2fa67d23a45755014ee9,PodSandboxId:d6c1775cb397c96897a8de69200128e362f2c5848fb82bc56c1bda139fd469de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705375107533635031,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f47fadd92bab56c45d6d368728025ede6d264522c3f0cfd263543f5d68ae0e2,PodSandboxId:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705375106921765751,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 96857140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c79c8713cf40509eb13eda6bc933e764ae4bb84455c0a1ed89ca21d32bc667cc,PodSandboxId:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1705374811839259892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 96857140,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cc0f713c-04d3-4784-b26c-6dafafd040e7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.414515483Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c40fcc08-fe8b-4a6d-b5bf-a68c21b1b5ba name=/runtime.v1.RuntimeService/Version
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.414599367Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c40fcc08-fe8b-4a6d-b5bf-a68c21b1b5ba name=/runtime.v1.RuntimeService/Version
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.415773355Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a307437e-a2b0-4f56-b62b-053af99d48eb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.416288544Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375747416271213,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a307437e-a2b0-4f56-b62b-053af99d48eb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.416948142Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0ec01c0e-c218-486a-8245-6c4f3168556a name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.417026465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0ec01c0e-c218-486a-8245-6c4f3168556a name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.417619574Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39f3d7fe5482f24d0405541c570378b116cdad4537f37ecd47a184e266678440,PodSandboxId:a7674dd11cbc95573f5a04933204bad08362a263485fa925f821ab54ea8e422a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375136967703370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5fac91-1606-4716-a04a-18e9d80c926b,},Annotations:map[string]string{io.kubernetes.container.hash: 3311cea0,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccddac0572d056ac2605ff0ddb55170ab09f4e0657bcb57b1a8faab6d748f1ae,PodSandboxId:b9cd1654d0aa8be03fd80ca5af350b324f6196fea89934eb5d6d5dd938c924ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705375136457781786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tv7gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1bf5e59-b2ae-489c-8297-25ad7c456303,},Annotations:map[string]string{io.kubernetes.container.hash: 52d989bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd98624191993b478451414500a1a65222fd5fa43a8947d332d980468fc0a67a,PodSandboxId:be8a8f92ce56eb65b9b241795b17f6a11b3acf0d096a5a926884629c8906873b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705375135624333509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qmzl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4b23ca-c18b-4158-b15b-df53326b384c,},Annotations:map[string]string{io.kubernetes.container.hash: 4a86af59,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e327d721f3f2f3b4ab0bbb5f1278403fea05f4be2b069c149c00c4a8c8a45fff,PodSandboxId:8db08f325ea8f04c81080318269bee2998c25d84a01408fb42c9a4101efad5b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705375109035882891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c003f74fd30a9694a059c0d4138e96d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1a366b22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c8ff8ca133a1b2c457758f11318941da96fd0179f38c83dff5592244df7855b,PodSandboxId:7788241941dcfa5597a07cb17ab04e7385db133cde9f7d3fcb79b38c69f99b44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705375107770737585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0478c9a69e8126feecf553fe27b1082236f91220b1da2fa67d23a45755014ee9,PodSandboxId:d6c1775cb397c96897a8de69200128e362f2c5848fb82bc56c1bda139fd469de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705375107533635031,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f47fadd92bab56c45d6d368728025ede6d264522c3f0cfd263543f5d68ae0e2,PodSandboxId:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705375106921765751,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 96857140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c79c8713cf40509eb13eda6bc933e764ae4bb84455c0a1ed89ca21d32bc667cc,PodSandboxId:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1705374811839259892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 96857140,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0ec01c0e-c218-486a-8245-6c4f3168556a name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.456968703Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6305020c-9bd8-4d43-bb3e-4e3d9f4d9d80 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.457078317Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6305020c-9bd8-4d43-bb3e-4e3d9f4d9d80 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.458839201Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3c6cdc11-fa2d-43bf-9854-b88d4b0ed505 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.459314952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375747459298543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3c6cdc11-fa2d-43bf-9854-b88d4b0ed505 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.459933970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f943aa99-2e24-421e-a391-a41d10f682d9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.460028679Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f943aa99-2e24-421e-a391-a41d10f682d9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:29:07 old-k8s-version-788237 crio[714]: time="2024-01-16 03:29:07.460223918Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39f3d7fe5482f24d0405541c570378b116cdad4537f37ecd47a184e266678440,PodSandboxId:a7674dd11cbc95573f5a04933204bad08362a263485fa925f821ab54ea8e422a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375136967703370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5fac91-1606-4716-a04a-18e9d80c926b,},Annotations:map[string]string{io.kubernetes.container.hash: 3311cea0,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccddac0572d056ac2605ff0ddb55170ab09f4e0657bcb57b1a8faab6d748f1ae,PodSandboxId:b9cd1654d0aa8be03fd80ca5af350b324f6196fea89934eb5d6d5dd938c924ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705375136457781786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tv7gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1bf5e59-b2ae-489c-8297-25ad7c456303,},Annotations:map[string]string{io.kubernetes.container.hash: 52d989bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd98624191993b478451414500a1a65222fd5fa43a8947d332d980468fc0a67a,PodSandboxId:be8a8f92ce56eb65b9b241795b17f6a11b3acf0d096a5a926884629c8906873b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705375135624333509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qmzl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4b23ca-c18b-4158-b15b-df53326b384c,},Annotations:map[string]string{io.kubernetes.container.hash: 4a86af59,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e327d721f3f2f3b4ab0bbb5f1278403fea05f4be2b069c149c00c4a8c8a45fff,PodSandboxId:8db08f325ea8f04c81080318269bee2998c25d84a01408fb42c9a4101efad5b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705375109035882891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c003f74fd30a9694a059c0d4138e96d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1a366b22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c8ff8ca133a1b2c457758f11318941da96fd0179f38c83dff5592244df7855b,PodSandboxId:7788241941dcfa5597a07cb17ab04e7385db133cde9f7d3fcb79b38c69f99b44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705375107770737585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0478c9a69e8126feecf553fe27b1082236f91220b1da2fa67d23a45755014ee9,PodSandboxId:d6c1775cb397c96897a8de69200128e362f2c5848fb82bc56c1bda139fd469de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705375107533635031,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f47fadd92bab56c45d6d368728025ede6d264522c3f0cfd263543f5d68ae0e2,PodSandboxId:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705375106921765751,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 96857140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c79c8713cf40509eb13eda6bc933e764ae4bb84455c0a1ed89ca21d32bc667cc,PodSandboxId:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1705374811839259892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 96857140,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f943aa99-2e24-421e-a391-a41d10f682d9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	39f3d7fe5482f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   a7674dd11cbc9       storage-provisioner
	ccddac0572d05       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   b9cd1654d0aa8       kube-proxy-tv7gz
	cd98624191993       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   be8a8f92ce56e       coredns-5644d7b6d9-qmzl6
	e327d721f3f2f       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   8db08f325ea8f       etcd-old-k8s-version-788237
	7c8ff8ca133a1       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   7788241941dcf       kube-controller-manager-old-k8s-version-788237
	0478c9a69e812       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   d6c1775cb397c       kube-scheduler-old-k8s-version-788237
	3f47fadd92bab       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            1                   bb471801af694       kube-apiserver-old-k8s-version-788237
	c79c8713cf405       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   15 minutes ago      Exited              kube-apiserver            0                   bb471801af694       kube-apiserver-old-k8s-version-788237
	
	
	==> coredns [cd98624191993b478451414500a1a65222fd5fa43a8947d332d980468fc0a67a] <==
	.:53
	2024-01-16T03:18:55.969Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2024-01-16T03:18:55.969Z [INFO] CoreDNS-1.6.2
	2024-01-16T03:18:55.969Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-16T03:18:55.996Z [INFO] 127.0.0.1:42780 - 25246 "HINFO IN 306935609111123163.5757372064153635715. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.02665838s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-788237
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-788237
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=old-k8s-version-788237
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_18_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:18:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:28:34 +0000   Tue, 16 Jan 2024 03:18:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:28:34 +0000   Tue, 16 Jan 2024 03:18:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:28:34 +0000   Tue, 16 Jan 2024 03:18:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:28:34 +0000   Tue, 16 Jan 2024 03:18:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.91
	  Hostname:    old-k8s-version-788237
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 6f003db2ea7544b986d77ceb575a7aa0
	 System UUID:                6f003db2-ea75-44b9-86d7-7ceb575a7aa0
	 Boot ID:                    373fd605-6a49-4434-b320-0698ea4aaf5a
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-qmzl6                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-788237                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m26s
	  kube-system                kube-apiserver-old-k8s-version-788237             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                kube-controller-manager-old-k8s-version-788237    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                kube-proxy-tv7gz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-788237             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                metrics-server-74d5856cc6-tx8jt                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-788237     Node old-k8s-version-788237 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet, old-k8s-version-788237     Node old-k8s-version-788237 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet, old-k8s-version-788237     Node old-k8s-version-788237 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-788237  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan16 03:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070635] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.539427] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jan16 03:13] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153619] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.445032] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.117977] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.125274] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.169050] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.106971] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.248670] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +18.819809] systemd-fstab-generator[1016]: Ignoring "noauto" for root device
	[  +0.490027] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +26.975714] kauditd_printk_skb: 13 callbacks suppressed
	[Jan16 03:14] kauditd_printk_skb: 2 callbacks suppressed
	[Jan16 03:18] systemd-fstab-generator[3094]: Ignoring "noauto" for root device
	[  +0.779988] kauditd_printk_skb: 6 callbacks suppressed
	[Jan16 03:19] hrtimer: interrupt took 2584251 ns
	[  +1.203884] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [e327d721f3f2f3b4ab0bbb5f1278403fea05f4be2b069c149c00c4a8c8a45fff] <==
	2024-01-16 03:18:29.154981 I | raft: newRaft 3a19c1a50e8a825c [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2024-01-16 03:18:29.154985 I | raft: 3a19c1a50e8a825c became follower at term 1
	2024-01-16 03:18:29.163581 W | auth: simple token is not cryptographically signed
	2024-01-16 03:18:29.168830 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-16 03:18:29.170880 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-16 03:18:29.171088 I | embed: listening for metrics on http://192.168.39.91:2381
	2024-01-16 03:18:29.171372 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-16 03:18:29.171635 I | etcdserver: 3a19c1a50e8a825c as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-01-16 03:18:29.171988 I | etcdserver/membership: added member 3a19c1a50e8a825c [https://192.168.39.91:2380] to cluster 674de9ca81299bdc
	2024-01-16 03:18:29.955421 I | raft: 3a19c1a50e8a825c is starting a new election at term 1
	2024-01-16 03:18:29.955609 I | raft: 3a19c1a50e8a825c became candidate at term 2
	2024-01-16 03:18:29.955638 I | raft: 3a19c1a50e8a825c received MsgVoteResp from 3a19c1a50e8a825c at term 2
	2024-01-16 03:18:29.955661 I | raft: 3a19c1a50e8a825c became leader at term 2
	2024-01-16 03:18:29.955677 I | raft: raft.node: 3a19c1a50e8a825c elected leader 3a19c1a50e8a825c at term 2
	2024-01-16 03:18:29.956259 I | etcdserver: setting up the initial cluster version to 3.3
	2024-01-16 03:18:29.956679 I | etcdserver: published {Name:old-k8s-version-788237 ClientURLs:[https://192.168.39.91:2379]} to cluster 674de9ca81299bdc
	2024-01-16 03:18:29.957032 I | embed: ready to serve client requests
	2024-01-16 03:18:29.957893 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-16 03:18:29.957975 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-16 03:18:29.958102 I | embed: ready to serve client requests
	2024-01-16 03:18:29.959253 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-16 03:18:29.960674 I | embed: serving client requests on 192.168.39.91:2379
	2024-01-16 03:18:55.245295 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/metrics-server\" " with result "range_response_count:1 size:2853" took too long (103.903871ms) to execute
	2024-01-16 03:28:30.406088 I | mvcc: store.index: compact 664
	2024-01-16 03:28:30.412600 I | mvcc: finished scheduled compaction at 664 (took 5.515886ms)
	
	
	==> kernel <==
	 03:29:07 up 16 min,  0 users,  load average: 0.45, 0.26, 0.21
	Linux old-k8s-version-788237 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [3f47fadd92bab56c45d6d368728025ede6d264522c3f0cfd263543f5d68ae0e2] <==
	I0116 03:21:57.175998       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:21:57.176147       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:21:57.176231       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:21:57.176238       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:23:34.795618       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:23:34.795794       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:23:34.795934       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:23:34.795946       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:24:34.796281       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:24:34.796585       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:24:34.796707       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:24:34.796739       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:26:34.797165       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:26:34.797595       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:26:34.797703       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:26:34.797728       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:28:34.798867       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:28:34.799279       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:28:34.799446       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:28:34.799560       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [c79c8713cf40509eb13eda6bc933e764ae4bb84455c0a1ed89ca21d32bc667cc] <==
	W0116 03:18:22.685337       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.685285       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.685394       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.687650       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.687831       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.687884       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.687888       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.688046       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.688712       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.688781       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.688828       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.688829       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.688863       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:23.973382       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:23.974995       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.006764       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.020625       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.024026       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.039443       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.043840       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.083238       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.085915       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.089106       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.118713       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.123982       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-controller-manager [7c8ff8ca133a1b2c457758f11318941da96fd0179f38c83dff5592244df7855b] <==
	E0116 03:22:56.067282       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:23:10.049872       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:23:26.319757       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:23:42.052978       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:23:56.572226       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:24:14.055357       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:24:26.824411       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:24:46.058398       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:24:57.076389       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:25:18.061104       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:25:27.328722       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:25:50.063344       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:25:57.581358       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:26:22.065809       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:26:27.834090       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:26:54.069391       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:26:58.086287       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:27:26.071753       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:27:28.338939       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:27:58.074221       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:27:58.591057       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0116 03:28:28.843601       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:28:30.076943       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:28:59.096227       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:29:02.079188       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [ccddac0572d056ac2605ff0ddb55170ab09f4e0657bcb57b1a8faab6d748f1ae] <==
	W0116 03:18:56.906029       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0116 03:18:56.917994       1 node.go:135] Successfully retrieved node IP: 192.168.39.91
	I0116 03:18:56.918059       1 server_others.go:149] Using iptables Proxier.
	I0116 03:18:56.918692       1 server.go:529] Version: v1.16.0
	I0116 03:18:56.926983       1 config.go:313] Starting service config controller
	I0116 03:18:56.927050       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0116 03:18:56.927235       1 config.go:131] Starting endpoints config controller
	I0116 03:18:56.927250       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0116 03:18:57.030706       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0116 03:18:57.030814       1 shared_informer.go:204] Caches are synced for service config 
	
	
	==> kube-scheduler [0478c9a69e8126feecf553fe27b1082236f91220b1da2fa67d23a45755014ee9] <==
	I0116 03:18:33.809936       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0116 03:18:33.810798       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0116 03:18:33.845650       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 03:18:33.862258       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:18:33.862431       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 03:18:33.863518       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 03:18:33.863638       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:18:33.865597       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 03:18:33.865680       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:18:33.865713       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:18:33.865754       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 03:18:33.865784       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 03:18:33.866803       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:18:34.847580       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 03:18:34.864577       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:18:34.868137       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 03:18:34.873391       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 03:18:34.877735       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:18:34.881272       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 03:18:34.883349       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:18:34.886121       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:18:34.887661       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 03:18:34.890590       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 03:18:34.891927       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:18:53.634997       1 factory.go:585] pod is already present in the activeQ
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:13:01 UTC, ends at Tue 2024-01-16 03:29:08 UTC. --
	Jan 16 03:24:28 old-k8s-version-788237 kubelet[3100]: E0116 03:24:28.158731    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:24:41 old-k8s-version-788237 kubelet[3100]: E0116 03:24:41.161788    3100 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 03:24:41 old-k8s-version-788237 kubelet[3100]: E0116 03:24:41.161875    3100 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 03:24:41 old-k8s-version-788237 kubelet[3100]: E0116 03:24:41.161937    3100 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 03:24:41 old-k8s-version-788237 kubelet[3100]: E0116 03:24:41.161972    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 16 03:24:55 old-k8s-version-788237 kubelet[3100]: E0116 03:24:55.150292    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:25:09 old-k8s-version-788237 kubelet[3100]: E0116 03:25:09.150817    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:25:22 old-k8s-version-788237 kubelet[3100]: E0116 03:25:22.150620    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:25:37 old-k8s-version-788237 kubelet[3100]: E0116 03:25:37.150276    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:25:49 old-k8s-version-788237 kubelet[3100]: E0116 03:25:49.150060    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:26:01 old-k8s-version-788237 kubelet[3100]: E0116 03:26:01.150225    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:26:14 old-k8s-version-788237 kubelet[3100]: E0116 03:26:14.150057    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:26:29 old-k8s-version-788237 kubelet[3100]: E0116 03:26:29.150153    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:26:41 old-k8s-version-788237 kubelet[3100]: E0116 03:26:41.150808    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:26:56 old-k8s-version-788237 kubelet[3100]: E0116 03:26:56.150979    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:27:11 old-k8s-version-788237 kubelet[3100]: E0116 03:27:11.150541    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:27:25 old-k8s-version-788237 kubelet[3100]: E0116 03:27:25.150289    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:27:38 old-k8s-version-788237 kubelet[3100]: E0116 03:27:38.150038    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:27:51 old-k8s-version-788237 kubelet[3100]: E0116 03:27:51.150845    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:28:06 old-k8s-version-788237 kubelet[3100]: E0116 03:28:06.151000    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:28:17 old-k8s-version-788237 kubelet[3100]: E0116 03:28:17.150637    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:28:26 old-k8s-version-788237 kubelet[3100]: E0116 03:28:26.317778    3100 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 16 03:28:28 old-k8s-version-788237 kubelet[3100]: E0116 03:28:28.150416    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:28:43 old-k8s-version-788237 kubelet[3100]: E0116 03:28:43.150394    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:28:58 old-k8s-version-788237 kubelet[3100]: E0116 03:28:58.150381    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [39f3d7fe5482f24d0405541c570378b116cdad4537f37ecd47a184e266678440] <==
	I0116 03:18:57.180332       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:18:57.193905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:18:57.193979       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:18:57.203549       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:18:57.203725       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-788237_9dbd0fef-2950-40a3-bfce-0a7c3322bd4e!
	I0116 03:18:57.207432       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a352f12e-5d84-4668-bd31-56150fefa2b8", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-788237_9dbd0fef-2950-40a3-bfce-0a7c3322bd4e became leader
	I0116 03:18:57.304411       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-788237_9dbd0fef-2950-40a3-bfce-0a7c3322bd4e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-788237 -n old-k8s-version-788237
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-788237 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-tx8jt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-788237 describe pod metrics-server-74d5856cc6-tx8jt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-788237 describe pod metrics-server-74d5856cc6-tx8jt: exit status 1 (68.282544ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-tx8jt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-788237 describe pod metrics-server-74d5856cc6-tx8jt: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.63s)
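
For reference, the non-running-pod check that produced the output above (helpers_test.go:261 and helpers_test.go:277) can be reproduced by hand with the same kubectl calls the harness shells out to. The Go sketch below is hypothetical and not part of the test harness; the only run-specific value it assumes is the old-k8s-version-788237 kubeconfig context from this report.

// postmortem.go: minimal, hypothetical sketch of the non-running-pod check
// performed above. It shells out to kubectl the same way helpers_test.go does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "old-k8s-version-788237" // minikube profile / kubeconfig context from this run

	// List pods in all namespaces whose phase is not Running.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"-o", "jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl get failed:", err)
		return
	}

	// Describe each non-running pod for the post-mortem record.
	for _, pod := range strings.Fields(string(out)) {
		desc, derr := exec.Command("kubectl", "--context", ctx, "describe", "pod", pod).CombinedOutput()
		fmt.Printf("--- %s (err=%v) ---\n%s\n", pod, derr, desc)
	}
}

As in the log above, the describe step can report NotFound when the pod lives outside the current default namespace (metrics-server runs in kube-system) or has already been removed by the time it is described.
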

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0116 03:23:50.560009  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-775571 -n default-k8s-diff-port-775571
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-16 03:32:32.516759743 +0000 UTC m=+5530.400586314
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
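
For reference, the 9m0s wait at start_stop_delete_test.go:274 amounts to polling for a pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. The Go sketch below is a rough, hypothetical standalone equivalent (the harness additionally checks pod readiness, not just phase); the default-k8s-diff-port-775571 context name is simply the profile from this run.

// waitdash.go: hypothetical stand-in for the dashboard wait at
// start_stop_delete_test.go:274, polling until a k8s-app=kubernetes-dashboard
// pod reports phase Running or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx := "default-k8s-diff-port-775571" // minikube profile / kubeconfig context from this run
	deadline := time.Now().Add(9 * time.Minute)

	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"-n", "kubernetes-dashboard", "get", "pods",
			"-l", "k8s-app=kubernetes-dashboard",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("dashboard pod is running")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for kubernetes-dashboard pod") // the failure mode reported above
}

In this run the poll would simply hit its deadline, matching the "context deadline exceeded" reported by the test.
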
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-775571 -n default-k8s-diff-port-775571
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-775571 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-775571 logs -n 25: (1.53854519s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-807979 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | disable-driver-mounts-807979                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:06 UTC |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-934668             | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-934668                                   | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-480663            | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-788237        | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-775571  | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC | 16 Jan 24 03:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC |                     |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-934668                  | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-480663                 | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-934668                                   | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC | 16 Jan 24 03:24 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC | 16 Jan 24 03:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-788237             | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC | 16 Jan 24 03:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-775571       | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC | 16 Jan 24 03:23 UTC |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:32 UTC | 16 Jan 24 03:32 UTC |
	| start   | -p newest-cni-190843 --memory=2200 --alsologtostderr   | newest-cni-190843            | jenkins | v1.32.0 | 16 Jan 24 03:32 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:32:05
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:32:05.212846 1016909 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:32:05.213127 1016909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:32:05.213137 1016909 out.go:309] Setting ErrFile to fd 2...
	I0116 03:32:05.213142 1016909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:32:05.213327 1016909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 03:32:05.214010 1016909 out.go:303] Setting JSON to false
	I0116 03:32:05.215295 1016909 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":15275,"bootTime":1705360651,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 03:32:05.215360 1016909 start.go:138] virtualization: kvm guest
	I0116 03:32:05.218234 1016909 out.go:177] * [newest-cni-190843] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 03:32:05.219856 1016909 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:32:05.219856 1016909 notify.go:220] Checking for updates...
	I0116 03:32:05.223126 1016909 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:32:05.225146 1016909 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:32:05.226713 1016909 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 03:32:05.228305 1016909 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 03:32:05.229952 1016909 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:32:05.232014 1016909 config.go:182] Loaded profile config "default-k8s-diff-port-775571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:32:05.232132 1016909 config.go:182] Loaded profile config "embed-certs-480663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:32:05.232253 1016909 config.go:182] Loaded profile config "no-preload-934668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:32:05.232506 1016909 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:32:05.273796 1016909 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 03:32:05.275294 1016909 start.go:298] selected driver: kvm2
	I0116 03:32:05.275312 1016909 start.go:902] validating driver "kvm2" against <nil>
	I0116 03:32:05.275368 1016909 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:32:05.276237 1016909 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:32:05.276324 1016909 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17967-971255/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 03:32:05.292907 1016909 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 03:32:05.292964 1016909 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0116 03:32:05.293049 1016909 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0116 03:32:05.293316 1016909 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0116 03:32:05.293389 1016909 cni.go:84] Creating CNI manager for ""
	I0116 03:32:05.293405 1016909 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:32:05.293417 1016909 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 03:32:05.293426 1016909 start_flags.go:321] config:
	{Name:newest-cni-190843 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-190843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:32:05.293661 1016909 iso.go:125] acquiring lock: {Name:mkc0ce0b4b435c5fb7570521b794e5982bb7bf6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:32:05.295893 1016909 out.go:177] * Starting control plane node newest-cni-190843 in cluster newest-cni-190843
	I0116 03:32:05.297239 1016909 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 03:32:05.297285 1016909 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0116 03:32:05.297295 1016909 cache.go:56] Caching tarball of preloaded images
	I0116 03:32:05.297432 1016909 preload.go:174] Found /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 03:32:05.297447 1016909 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0116 03:32:05.297603 1016909 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/newest-cni-190843/config.json ...
	I0116 03:32:05.297631 1016909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/newest-cni-190843/config.json: {Name:mk7d619c3c7ca0d35f7ef7967861c85d73a75388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:32:05.297892 1016909 start.go:365] acquiring machines lock for newest-cni-190843: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:32:05.297959 1016909 start.go:369] acquired machines lock for "newest-cni-190843" in 41.247µs
	I0116 03:32:05.297992 1016909 start.go:93] Provisioning new machine with config: &{Name:newest-cni-190843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-190843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:32:05.298108 1016909 start.go:125] createHost starting for "" (driver="kvm2")
	I0116 03:32:05.299905 1016909 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0116 03:32:05.300049 1016909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:32:05.300094 1016909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:32:05.316399 1016909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44549
	I0116 03:32:05.317031 1016909 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:32:05.317606 1016909 main.go:141] libmachine: Using API Version  1
	I0116 03:32:05.317632 1016909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:32:05.318081 1016909 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:32:05.318297 1016909 main.go:141] libmachine: (newest-cni-190843) Calling .GetMachineName
	I0116 03:32:05.318486 1016909 main.go:141] libmachine: (newest-cni-190843) Calling .DriverName
	I0116 03:32:05.318656 1016909 start.go:159] libmachine.API.Create for "newest-cni-190843" (driver="kvm2")
	I0116 03:32:05.318690 1016909 client.go:168] LocalClient.Create starting
	I0116 03:32:05.318739 1016909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem
	I0116 03:32:05.318781 1016909 main.go:141] libmachine: Decoding PEM data...
	I0116 03:32:05.318806 1016909 main.go:141] libmachine: Parsing certificate...
	I0116 03:32:05.318882 1016909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem
	I0116 03:32:05.318903 1016909 main.go:141] libmachine: Decoding PEM data...
	I0116 03:32:05.318915 1016909 main.go:141] libmachine: Parsing certificate...
	I0116 03:32:05.318934 1016909 main.go:141] libmachine: Running pre-create checks...
	I0116 03:32:05.318944 1016909 main.go:141] libmachine: (newest-cni-190843) Calling .PreCreateCheck
	I0116 03:32:05.319384 1016909 main.go:141] libmachine: (newest-cni-190843) Calling .GetConfigRaw
	I0116 03:32:05.319892 1016909 main.go:141] libmachine: Creating machine...
	I0116 03:32:05.319907 1016909 main.go:141] libmachine: (newest-cni-190843) Calling .Create
	I0116 03:32:05.320070 1016909 main.go:141] libmachine: (newest-cni-190843) Creating KVM machine...
	I0116 03:32:05.321609 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | found existing default KVM network
	I0116 03:32:05.323542 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:05.323355 1016931 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147f10}
	I0116 03:32:05.330248 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | trying to create private KVM network mk-newest-cni-190843 192.168.39.0/24...
	I0116 03:32:05.415343 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | private KVM network mk-newest-cni-190843 192.168.39.0/24 created
	I0116 03:32:05.415384 1016909 main.go:141] libmachine: (newest-cni-190843) Setting up store path in /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843 ...
	I0116 03:32:05.415412 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:05.415316 1016931 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 03:32:05.415434 1016909 main.go:141] libmachine: (newest-cni-190843) Building disk image from file:///home/jenkins/minikube-integration/17967-971255/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 03:32:05.415632 1016909 main.go:141] libmachine: (newest-cni-190843) Downloading /home/jenkins/minikube-integration/17967-971255/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17967-971255/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 03:32:05.668581 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:05.668448 1016931 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/id_rsa...
	I0116 03:32:05.751163 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:05.751025 1016931 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/newest-cni-190843.rawdisk...
	I0116 03:32:05.751206 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Writing magic tar header
	I0116 03:32:05.751232 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Writing SSH key tar header
	I0116 03:32:05.751393 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:05.751314 1016931 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843 ...
	I0116 03:32:05.751491 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843
	I0116 03:32:05.751527 1016909 main.go:141] libmachine: (newest-cni-190843) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843 (perms=drwx------)
	I0116 03:32:05.751544 1016909 main.go:141] libmachine: (newest-cni-190843) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube/machines (perms=drwxr-xr-x)
	I0116 03:32:05.751557 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube/machines
	I0116 03:32:05.751594 1016909 main.go:141] libmachine: (newest-cni-190843) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube (perms=drwxr-xr-x)
	I0116 03:32:05.751621 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 03:32:05.751636 1016909 main.go:141] libmachine: (newest-cni-190843) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255 (perms=drwxrwxr-x)
	I0116 03:32:05.751652 1016909 main.go:141] libmachine: (newest-cni-190843) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 03:32:05.751669 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255
	I0116 03:32:05.751712 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 03:32:05.751727 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Checking permissions on dir: /home/jenkins
	I0116 03:32:05.751737 1016909 main.go:141] libmachine: (newest-cni-190843) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 03:32:05.751748 1016909 main.go:141] libmachine: (newest-cni-190843) Creating domain...
	I0116 03:32:05.751760 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Checking permissions on dir: /home
	I0116 03:32:05.751772 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Skipping /home - not owner
	I0116 03:32:05.753038 1016909 main.go:141] libmachine: (newest-cni-190843) define libvirt domain using xml: 
	I0116 03:32:05.753063 1016909 main.go:141] libmachine: (newest-cni-190843) <domain type='kvm'>
	I0116 03:32:05.753074 1016909 main.go:141] libmachine: (newest-cni-190843)   <name>newest-cni-190843</name>
	I0116 03:32:05.753086 1016909 main.go:141] libmachine: (newest-cni-190843)   <memory unit='MiB'>2200</memory>
	I0116 03:32:05.753096 1016909 main.go:141] libmachine: (newest-cni-190843)   <vcpu>2</vcpu>
	I0116 03:32:05.753109 1016909 main.go:141] libmachine: (newest-cni-190843)   <features>
	I0116 03:32:05.753122 1016909 main.go:141] libmachine: (newest-cni-190843)     <acpi/>
	I0116 03:32:05.753142 1016909 main.go:141] libmachine: (newest-cni-190843)     <apic/>
	I0116 03:32:05.753158 1016909 main.go:141] libmachine: (newest-cni-190843)     <pae/>
	I0116 03:32:05.753174 1016909 main.go:141] libmachine: (newest-cni-190843)     
	I0116 03:32:05.753189 1016909 main.go:141] libmachine: (newest-cni-190843)   </features>
	I0116 03:32:05.753201 1016909 main.go:141] libmachine: (newest-cni-190843)   <cpu mode='host-passthrough'>
	I0116 03:32:05.753216 1016909 main.go:141] libmachine: (newest-cni-190843)   
	I0116 03:32:05.753228 1016909 main.go:141] libmachine: (newest-cni-190843)   </cpu>
	I0116 03:32:05.753250 1016909 main.go:141] libmachine: (newest-cni-190843)   <os>
	I0116 03:32:05.753280 1016909 main.go:141] libmachine: (newest-cni-190843)     <type>hvm</type>
	I0116 03:32:05.753291 1016909 main.go:141] libmachine: (newest-cni-190843)     <boot dev='cdrom'/>
	I0116 03:32:05.753299 1016909 main.go:141] libmachine: (newest-cni-190843)     <boot dev='hd'/>
	I0116 03:32:05.753306 1016909 main.go:141] libmachine: (newest-cni-190843)     <bootmenu enable='no'/>
	I0116 03:32:05.753314 1016909 main.go:141] libmachine: (newest-cni-190843)   </os>
	I0116 03:32:05.753320 1016909 main.go:141] libmachine: (newest-cni-190843)   <devices>
	I0116 03:32:05.753329 1016909 main.go:141] libmachine: (newest-cni-190843)     <disk type='file' device='cdrom'>
	I0116 03:32:05.753339 1016909 main.go:141] libmachine: (newest-cni-190843)       <source file='/home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/boot2docker.iso'/>
	I0116 03:32:05.753348 1016909 main.go:141] libmachine: (newest-cni-190843)       <target dev='hdc' bus='scsi'/>
	I0116 03:32:05.753376 1016909 main.go:141] libmachine: (newest-cni-190843)       <readonly/>
	I0116 03:32:05.753394 1016909 main.go:141] libmachine: (newest-cni-190843)     </disk>
	I0116 03:32:05.753402 1016909 main.go:141] libmachine: (newest-cni-190843)     <disk type='file' device='disk'>
	I0116 03:32:05.753409 1016909 main.go:141] libmachine: (newest-cni-190843)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 03:32:05.753429 1016909 main.go:141] libmachine: (newest-cni-190843)       <source file='/home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/newest-cni-190843.rawdisk'/>
	I0116 03:32:05.753437 1016909 main.go:141] libmachine: (newest-cni-190843)       <target dev='hda' bus='virtio'/>
	I0116 03:32:05.753444 1016909 main.go:141] libmachine: (newest-cni-190843)     </disk>
	I0116 03:32:05.753452 1016909 main.go:141] libmachine: (newest-cni-190843)     <interface type='network'>
	I0116 03:32:05.753459 1016909 main.go:141] libmachine: (newest-cni-190843)       <source network='mk-newest-cni-190843'/>
	I0116 03:32:05.753467 1016909 main.go:141] libmachine: (newest-cni-190843)       <model type='virtio'/>
	I0116 03:32:05.753473 1016909 main.go:141] libmachine: (newest-cni-190843)     </interface>
	I0116 03:32:05.753483 1016909 main.go:141] libmachine: (newest-cni-190843)     <interface type='network'>
	I0116 03:32:05.753492 1016909 main.go:141] libmachine: (newest-cni-190843)       <source network='default'/>
	I0116 03:32:05.753498 1016909 main.go:141] libmachine: (newest-cni-190843)       <model type='virtio'/>
	I0116 03:32:05.753507 1016909 main.go:141] libmachine: (newest-cni-190843)     </interface>
	I0116 03:32:05.753512 1016909 main.go:141] libmachine: (newest-cni-190843)     <serial type='pty'>
	I0116 03:32:05.753521 1016909 main.go:141] libmachine: (newest-cni-190843)       <target port='0'/>
	I0116 03:32:05.753527 1016909 main.go:141] libmachine: (newest-cni-190843)     </serial>
	I0116 03:32:05.753535 1016909 main.go:141] libmachine: (newest-cni-190843)     <console type='pty'>
	I0116 03:32:05.753541 1016909 main.go:141] libmachine: (newest-cni-190843)       <target type='serial' port='0'/>
	I0116 03:32:05.753549 1016909 main.go:141] libmachine: (newest-cni-190843)     </console>
	I0116 03:32:05.753558 1016909 main.go:141] libmachine: (newest-cni-190843)     <rng model='virtio'>
	I0116 03:32:05.753576 1016909 main.go:141] libmachine: (newest-cni-190843)       <backend model='random'>/dev/random</backend>
	I0116 03:32:05.753584 1016909 main.go:141] libmachine: (newest-cni-190843)     </rng>
	I0116 03:32:05.753590 1016909 main.go:141] libmachine: (newest-cni-190843)     
	I0116 03:32:05.753595 1016909 main.go:141] libmachine: (newest-cni-190843)     
	I0116 03:32:05.753604 1016909 main.go:141] libmachine: (newest-cni-190843)   </devices>
	I0116 03:32:05.753609 1016909 main.go:141] libmachine: (newest-cni-190843) </domain>
	I0116 03:32:05.753619 1016909 main.go:141] libmachine: (newest-cni-190843) 
	I0116 03:32:05.758676 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:ed:fa:1d in network default
	I0116 03:32:05.759420 1016909 main.go:141] libmachine: (newest-cni-190843) Ensuring networks are active...
	I0116 03:32:05.759449 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:05.760711 1016909 main.go:141] libmachine: (newest-cni-190843) Ensuring network default is active
	I0116 03:32:05.761076 1016909 main.go:141] libmachine: (newest-cni-190843) Ensuring network mk-newest-cni-190843 is active
	I0116 03:32:05.761857 1016909 main.go:141] libmachine: (newest-cni-190843) Getting domain xml...
	I0116 03:32:05.762956 1016909 main.go:141] libmachine: (newest-cni-190843) Creating domain...
	I0116 03:32:07.062896 1016909 main.go:141] libmachine: (newest-cni-190843) Waiting to get IP...
	I0116 03:32:07.063781 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:07.064415 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:07.064542 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:07.064443 1016931 retry.go:31] will retry after 251.904581ms: waiting for machine to come up
	I0116 03:32:07.318111 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:07.318712 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:07.318749 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:07.318683 1016931 retry.go:31] will retry after 236.499042ms: waiting for machine to come up
	I0116 03:32:07.557170 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:07.557684 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:07.557720 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:07.557632 1016931 retry.go:31] will retry after 394.353611ms: waiting for machine to come up
	I0116 03:32:07.954083 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:07.954680 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:07.954723 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:07.954651 1016931 retry.go:31] will retry after 574.464873ms: waiting for machine to come up
	I0116 03:32:08.530742 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:08.531295 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:08.531327 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:08.531239 1016931 retry.go:31] will retry after 698.276796ms: waiting for machine to come up
	I0116 03:32:09.231168 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:09.231722 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:09.231752 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:09.231668 1016931 retry.go:31] will retry after 853.173834ms: waiting for machine to come up
	I0116 03:32:10.086842 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:10.087297 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:10.087328 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:10.087242 1016931 retry.go:31] will retry after 1.154956983s: waiting for machine to come up
	I0116 03:32:11.244096 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:11.244636 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:11.244662 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:11.244578 1016931 retry.go:31] will retry after 1.483844581s: waiting for machine to come up
	I0116 03:32:12.730593 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:12.731160 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:12.731201 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:12.731083 1016931 retry.go:31] will retry after 1.775480625s: waiting for machine to come up
	I0116 03:32:14.508060 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:14.508672 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:14.508702 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:14.508597 1016931 retry.go:31] will retry after 1.937489177s: waiting for machine to come up
	I0116 03:32:16.447960 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:16.448548 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:16.448586 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:16.448497 1016931 retry.go:31] will retry after 2.201805443s: waiting for machine to come up
	I0116 03:32:18.652247 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:18.652732 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:18.652766 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:18.652652 1016931 retry.go:31] will retry after 2.227539038s: waiting for machine to come up
	I0116 03:32:20.882300 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:20.882790 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:20.882821 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:20.882749 1016931 retry.go:31] will retry after 3.22967887s: waiting for machine to come up
	I0116 03:32:24.114862 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:24.115394 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:24.115425 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:24.115332 1016931 retry.go:31] will retry after 5.229877584s: waiting for machine to come up
	I0116 03:32:29.348212 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:29.348804 1016909 main.go:141] libmachine: (newest-cni-190843) Found IP for machine: 192.168.39.3
	I0116 03:32:29.348825 1016909 main.go:141] libmachine: (newest-cni-190843) Reserving static IP address...
	I0116 03:32:29.348840 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has current primary IP address 192.168.39.3 and MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:29.349177 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find host DHCP lease matching {name: "newest-cni-190843", mac: "52:54:00:b0:40:c6", ip: "192.168.39.3"} in network mk-newest-cni-190843
	I0116 03:32:29.435959 1016909 main.go:141] libmachine: (newest-cni-190843) Reserved static IP address: 192.168.39.3
	I0116 03:32:29.435998 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Getting to WaitForSSH function...
	I0116 03:32:29.436008 1016909 main.go:141] libmachine: (newest-cni-190843) Waiting for SSH to be available...
	I0116 03:32:29.439085 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:29.439435 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b0:40:c6", ip: ""} in network mk-newest-cni-190843
	I0116 03:32:29.439471 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find defined IP address of network mk-newest-cni-190843 interface with MAC address 52:54:00:b0:40:c6
	I0116 03:32:29.439554 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Using SSH client type: external
	I0116 03:32:29.439585 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/id_rsa (-rw-------)
	I0116 03:32:29.439618 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:32:29.439629 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | About to run SSH command:
	I0116 03:32:29.439640 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | exit 0
	I0116 03:32:29.443796 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | SSH cmd err, output: exit status 255: 
	I0116 03:32:29.443834 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0116 03:32:29.443853 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | command : exit 0
	I0116 03:32:29.443864 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | err     : exit status 255
	I0116 03:32:29.443913 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | output  : 
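	The probe above shells out to the system ssh binary with the options shown and simply checks whether "exit 0" succeeds on the guest. Below is a minimal Go sketch of that liveness check, reusing the key path, flags and eventual IP copied from the log; the helper name and the use of os/exec are assumptions for illustration, not minikube's actual implementation.

	// sshprobe.go - illustrative sketch of the "exit 0" SSH probe seen in the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshAlive runs the external ssh client the same way the log shows and
	// returns a non-nil error until the guest answers.
	func sshAlive(ip, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + ip, // in the log the IP is still empty here ("docker@"),
			// which is consistent with the ssh exit status 255 reported above
			"exit 0",
		}
		return exec.Command("/usr/bin/ssh", args...).Run()
	}

	func main() {
		key := "/home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/id_rsa"
		fmt.Println("ssh probe:", sshAlive("192.168.39.3", key))
	}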
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 03:13:23 UTC, ends at Tue 2024-01-16 03:32:33 UTC. --
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.346345485Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375953346317042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9231f467-2d03-48ae-ad46-507ea69b76e7 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.348066671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d07475b8-e28e-41c2-8f61-269d00bffa76 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.348154602Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d07475b8-e28e-41c2-8f61-269d00bffa76 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.348385415Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da,PodSandboxId:61a74ac9505a932b4461b18658bb16bc362d6a18811776e82814571ec9db3fc8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375136852504664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c309131-3f2c-411d-9876-05424a2c3b79,},Annotations:map[string]string{io.kubernetes.container.hash: e101ede,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760,PodSandboxId:db7ec76550cb34c5db28c91510a33984c8e5c903f4f6acd4f9158d8a26abb56c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705375135524094244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mk795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b928a6ae-07af-4bc4-a0c5-b3027730459c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c266b1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f,PodSandboxId:c819f2cae9bceb42aecab2e15bce7bf8b11e7e40d1bdd57bed4fadb43b7241f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705375133697381276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw495,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09,},Annotations:map[string]string{io.kubernetes.container.hash: e69774ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e,PodSandboxId:5fc17422f18dab54e9aea11b879963b8baac7b8a0e7719cafde40f3d7877077e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705375112223677862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 8c0409886914ac24a407c6ba44a14827,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8,PodSandboxId:c915ddde32e8cd1b52b13209fc9f95bd71615bddc33fe6d6a7cb41d0c6322278,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705375112016065160,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64be651799388f650e
19798b8a3d6fbb,},Annotations:map[string]string{io.kubernetes.container.hash: 92ed9e12,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45,PodSandboxId:a4fbf180837a071cc7ec7173f14c2935d9dd5c7c942378868c616e45669d03b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705375111794029472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 5698deddf521f9a3979fbd1559af510a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24,PodSandboxId:719bc39a7d56c604da7879cbaff8d6c0e4b256ef0bde3332acbe8aa755fbc78d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705375111731047821,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 259678fcd273c7ffaa6ec96a449bc3eb,},Annotations:map[string]string{io.kubernetes.container.hash: f0349e94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d07475b8-e28e-41c2-8f61-269d00bffa76 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.403822498Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c0d55b2a-fb46-4990-b381-ed449a288352 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.403940852Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c0d55b2a-fb46-4990-b381-ed449a288352 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.405642415Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7831f382-fc6b-4bb3-afdd-c0f74bb484e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.406287297Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375953406264437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7831f382-fc6b-4bb3-afdd-c0f74bb484e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.407280932Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=275557ae-4e39-4b29-b7e1-d6a1690934b9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.407376233Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=275557ae-4e39-4b29-b7e1-d6a1690934b9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.408184872Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da,PodSandboxId:61a74ac9505a932b4461b18658bb16bc362d6a18811776e82814571ec9db3fc8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375136852504664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c309131-3f2c-411d-9876-05424a2c3b79,},Annotations:map[string]string{io.kubernetes.container.hash: e101ede,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760,PodSandboxId:db7ec76550cb34c5db28c91510a33984c8e5c903f4f6acd4f9158d8a26abb56c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705375135524094244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mk795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b928a6ae-07af-4bc4-a0c5-b3027730459c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c266b1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f,PodSandboxId:c819f2cae9bceb42aecab2e15bce7bf8b11e7e40d1bdd57bed4fadb43b7241f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705375133697381276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw495,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09,},Annotations:map[string]string{io.kubernetes.container.hash: e69774ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e,PodSandboxId:5fc17422f18dab54e9aea11b879963b8baac7b8a0e7719cafde40f3d7877077e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705375112223677862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 8c0409886914ac24a407c6ba44a14827,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8,PodSandboxId:c915ddde32e8cd1b52b13209fc9f95bd71615bddc33fe6d6a7cb41d0c6322278,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705375112016065160,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64be651799388f650e
19798b8a3d6fbb,},Annotations:map[string]string{io.kubernetes.container.hash: 92ed9e12,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45,PodSandboxId:a4fbf180837a071cc7ec7173f14c2935d9dd5c7c942378868c616e45669d03b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705375111794029472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 5698deddf521f9a3979fbd1559af510a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24,PodSandboxId:719bc39a7d56c604da7879cbaff8d6c0e4b256ef0bde3332acbe8aa755fbc78d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705375111731047821,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 259678fcd273c7ffaa6ec96a449bc3eb,},Annotations:map[string]string{io.kubernetes.container.hash: f0349e94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=275557ae-4e39-4b29-b7e1-d6a1690934b9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.452263577Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=02972451-8761-4e64-bbdf-fb4db9fe02e4 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.452377010Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=02972451-8761-4e64-bbdf-fb4db9fe02e4 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.454127027Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4af06398-91f3-476d-a816-12fefa9bb178 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.454524486Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375953454504779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4af06398-91f3-476d-a816-12fefa9bb178 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.455372998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a04dcef3-e286-4b54-8d45-86b5b70e0bf6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.455464402Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a04dcef3-e286-4b54-8d45-86b5b70e0bf6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.455765476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da,PodSandboxId:61a74ac9505a932b4461b18658bb16bc362d6a18811776e82814571ec9db3fc8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375136852504664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c309131-3f2c-411d-9876-05424a2c3b79,},Annotations:map[string]string{io.kubernetes.container.hash: e101ede,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760,PodSandboxId:db7ec76550cb34c5db28c91510a33984c8e5c903f4f6acd4f9158d8a26abb56c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705375135524094244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mk795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b928a6ae-07af-4bc4-a0c5-b3027730459c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c266b1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f,PodSandboxId:c819f2cae9bceb42aecab2e15bce7bf8b11e7e40d1bdd57bed4fadb43b7241f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705375133697381276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw495,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09,},Annotations:map[string]string{io.kubernetes.container.hash: e69774ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e,PodSandboxId:5fc17422f18dab54e9aea11b879963b8baac7b8a0e7719cafde40f3d7877077e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705375112223677862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 8c0409886914ac24a407c6ba44a14827,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8,PodSandboxId:c915ddde32e8cd1b52b13209fc9f95bd71615bddc33fe6d6a7cb41d0c6322278,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705375112016065160,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64be651799388f650e
19798b8a3d6fbb,},Annotations:map[string]string{io.kubernetes.container.hash: 92ed9e12,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45,PodSandboxId:a4fbf180837a071cc7ec7173f14c2935d9dd5c7c942378868c616e45669d03b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705375111794029472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 5698deddf521f9a3979fbd1559af510a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24,PodSandboxId:719bc39a7d56c604da7879cbaff8d6c0e4b256ef0bde3332acbe8aa755fbc78d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705375111731047821,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 259678fcd273c7ffaa6ec96a449bc3eb,},Annotations:map[string]string{io.kubernetes.container.hash: f0349e94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a04dcef3-e286-4b54-8d45-86b5b70e0bf6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.499842817Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=30ca278b-06f0-467a-9e12-fbceaabe06ec name=/runtime.v1.RuntimeService/Version
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.499924675Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=30ca278b-06f0-467a-9e12-fbceaabe06ec name=/runtime.v1.RuntimeService/Version
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.502679455Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3f86f19a-2354-4de0-866b-de0692046721 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.503139690Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375953503115489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3f86f19a-2354-4de0-866b-de0692046721 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.503926223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5256e5e0-1cd1-4210-a128-3f51bb9109c9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.503972828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5256e5e0-1cd1-4210-a128-3f51bb9109c9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:33 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:32:33.504133660Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da,PodSandboxId:61a74ac9505a932b4461b18658bb16bc362d6a18811776e82814571ec9db3fc8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375136852504664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c309131-3f2c-411d-9876-05424a2c3b79,},Annotations:map[string]string{io.kubernetes.container.hash: e101ede,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760,PodSandboxId:db7ec76550cb34c5db28c91510a33984c8e5c903f4f6acd4f9158d8a26abb56c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705375135524094244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mk795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b928a6ae-07af-4bc4-a0c5-b3027730459c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c266b1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f,PodSandboxId:c819f2cae9bceb42aecab2e15bce7bf8b11e7e40d1bdd57bed4fadb43b7241f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705375133697381276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw495,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09,},Annotations:map[string]string{io.kubernetes.container.hash: e69774ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e,PodSandboxId:5fc17422f18dab54e9aea11b879963b8baac7b8a0e7719cafde40f3d7877077e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705375112223677862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 8c0409886914ac24a407c6ba44a14827,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8,PodSandboxId:c915ddde32e8cd1b52b13209fc9f95bd71615bddc33fe6d6a7cb41d0c6322278,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705375112016065160,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64be651799388f650e
19798b8a3d6fbb,},Annotations:map[string]string{io.kubernetes.container.hash: 92ed9e12,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45,PodSandboxId:a4fbf180837a071cc7ec7173f14c2935d9dd5c7c942378868c616e45669d03b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705375111794029472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 5698deddf521f9a3979fbd1559af510a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24,PodSandboxId:719bc39a7d56c604da7879cbaff8d6c0e4b256ef0bde3332acbe8aa755fbc78d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705375111731047821,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 259678fcd273c7ffaa6ec96a449bc3eb,},Annotations:map[string]string{io.kubernetes.container.hash: f0349e94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5256e5e0-1cd1-4210-a128-3f51bb9109c9 name=/runtime.v1.RuntimeService/ListContainers
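	The repeated Version, ImageFsInfo and ListContainers entries above are the log collector polling CRI-O over its gRPC socket (unix:///var/run/crio/crio.sock, per the cri-socket annotation in the node description below). A minimal sketch of the same ListContainers call using the upstream CRI client stubs follows; the module choice and error handling are assumptions, not taken from the minikube source.

	// crilist.go - illustrative sketch of the /runtime.v1.RuntimeService/ListContainers call logged above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		// An empty filter returns the full container list, matching the
		// "No filters were applied, returning full container list" lines above.
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}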
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f2b31947cd9ab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   61a74ac9505a9       storage-provisioner
	8c87760cc0f44       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   db7ec76550cb3       coredns-5dd5756b68-mk795
	cd75d2109b882       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   c819f2cae9bce       kube-proxy-zw495
	19ca9f9fb8267       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   14 minutes ago      Running             kube-scheduler            2                   5fc17422f18da       kube-scheduler-default-k8s-diff-port-775571
	c4fca60077d67       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   14 minutes ago      Running             etcd                      2                   c915ddde32e8c       etcd-default-k8s-diff-port-775571
	7cde9c38c1e73       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   14 minutes ago      Running             kube-controller-manager   2                   a4fbf180837a0       kube-controller-manager-default-k8s-diff-port-775571
	94ed68f3d4f24       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   14 minutes ago      Running             kube-apiserver            2                   719bc39a7d56c       kube-apiserver-default-k8s-diff-port-775571
	
	
	==> coredns [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:57520 - 6359 "HINFO IN 6562304830807243736.8044346787423104161. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009863034s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-775571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-775571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=default-k8s-diff-port-775571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_18_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:18:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-775571
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:32:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:29:14 +0000   Tue, 16 Jan 2024 03:18:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:29:14 +0000   Tue, 16 Jan 2024 03:18:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:29:14 +0000   Tue, 16 Jan 2024 03:18:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:29:14 +0000   Tue, 16 Jan 2024 03:18:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.158
	  Hostname:    default-k8s-diff-port-775571
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 16cfbd7b5e9d4c779239e348cab0eaeb
	  System UUID:                16cfbd7b-5e9d-4c77-9239-e348cab0eaeb
	  Boot ID:                    46f4f379-8263-499e-bd43-2573973e73a1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-mk795                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-default-k8s-diff-port-775571                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-default-k8s-diff-port-775571             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-775571    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-zw495                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-diff-port-775571             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-928d7                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node default-k8s-diff-port-775571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node default-k8s-diff-port-775571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node default-k8s-diff-port-775571 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node default-k8s-diff-port-775571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node default-k8s-diff-port-775571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node default-k8s-diff-port-775571 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node default-k8s-diff-port-775571 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node default-k8s-diff-port-775571 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-775571 event: Registered Node default-k8s-diff-port-775571 in Controller
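	The node description above is the kubectl view of the same API object. A minimal sketch, assuming kubeconfig access to this cluster, of reading back the Ready/MemoryPressure/DiskPressure/PIDPressure conditions listed above with client-go; the flag handling is illustrative and not part of the test harness.

	// nodeconds.go - illustrative sketch of fetching the node conditions shown in "describe nodes".
	package main

	import (
		"context"
		"flag"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := flag.String("kubeconfig", clientcmd.RecommendedHomeFile, "path to kubeconfig")
		flag.Parse()

		cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"default-k8s-diff-port-775571", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Prints the same condition rows (type, status, reason) as the table above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}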
	
	
	==> dmesg <==
	[Jan16 03:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073507] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.929847] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.645523] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153043] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.492146] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.282122] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.156855] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.201235] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.169795] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.281013] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +18.101883] systemd-fstab-generator[915]: Ignoring "noauto" for root device
	[Jan16 03:14] kauditd_printk_skb: 29 callbacks suppressed
	[Jan16 03:18] systemd-fstab-generator[3538]: Ignoring "noauto" for root device
	[  +9.816044] systemd-fstab-generator[3865]: Ignoring "noauto" for root device
	[ +14.143912] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8] <==
	{"level":"info","ts":"2024-01-16T03:18:33.854787Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.158:2380"}
	{"level":"info","ts":"2024-01-16T03:18:33.85497Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.158:2380"}
	{"level":"info","ts":"2024-01-16T03:18:33.859952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"244d86dcb1337571 switched to configuration voters=(2615895240995992945)"}
	{"level":"info","ts":"2024-01-16T03:18:33.864055Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c08228541f5dd967","local-member-id":"244d86dcb1337571","added-peer-id":"244d86dcb1337571","added-peer-peer-urls":["https://192.168.72.158:2380"]}
	{"level":"info","ts":"2024-01-16T03:18:34.100684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"244d86dcb1337571 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-16T03:18:34.100791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"244d86dcb1337571 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-16T03:18:34.100841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"244d86dcb1337571 received MsgPreVoteResp from 244d86dcb1337571 at term 1"}
	{"level":"info","ts":"2024-01-16T03:18:34.100875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"244d86dcb1337571 became candidate at term 2"}
	{"level":"info","ts":"2024-01-16T03:18:34.100899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"244d86dcb1337571 received MsgVoteResp from 244d86dcb1337571 at term 2"}
	{"level":"info","ts":"2024-01-16T03:18:34.100926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"244d86dcb1337571 became leader at term 2"}
	{"level":"info","ts":"2024-01-16T03:18:34.100951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 244d86dcb1337571 elected leader 244d86dcb1337571 at term 2"}
	{"level":"info","ts":"2024-01-16T03:18:34.106788Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:18:34.110963Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"244d86dcb1337571","local-member-attributes":"{Name:default-k8s-diff-port-775571 ClientURLs:[https://192.168.72.158:2379]}","request-path":"/0/members/244d86dcb1337571/attributes","cluster-id":"c08228541f5dd967","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T03:18:34.111514Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:18:34.112717Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c08228541f5dd967","local-member-id":"244d86dcb1337571","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:18:34.112809Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:18:34.112861Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:18:34.112905Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:18:34.113903Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T03:18:34.116673Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T03:18:34.116767Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-16T03:18:34.137975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.158:2379"}
	{"level":"info","ts":"2024-01-16T03:28:34.587354Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":715}
	{"level":"info","ts":"2024-01-16T03:28:34.591275Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":715,"took":"3.026721ms","hash":526033480}
	{"level":"info","ts":"2024-01-16T03:28:34.591374Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":526033480,"revision":715,"compact-revision":-1}
	
	
	==> kernel <==
	 03:32:33 up 19 min,  0 users,  load average: 0.07, 0.20, 0.22
	Linux default-k8s-diff-port-775571 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24] <==
	I0116 03:28:36.609426       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 03:28:37.609401       1 handler_proxy.go:93] no RequestInfo found in the context
	W0116 03:28:37.609420       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:28:37.609681       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:28:37.609689       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0116 03:28:37.609744       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:28:37.611768       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:29:36.433151       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 03:29:37.610082       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:29:37.610360       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:29:37.610413       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:29:37.612510       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:29:37.612656       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:29:37.612671       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:30:36.433531       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 03:31:36.433152       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 03:31:37.611656       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:31:37.611821       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:31:37.611878       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:31:37.613878       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:31:37.614004       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:31:37.614047       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45] <==
	I0116 03:26:52.179633       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:27:21.750035       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:27:22.190356       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:27:51.757176       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:27:52.200395       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:28:21.769142       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:28:22.209772       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:28:51.777032       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:28:52.220717       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:29:21.783278       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:29:22.229810       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:29:51.793373       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:29:52.240241       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0116 03:30:08.137974       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="401.929µs"
	I0116 03:30:21.132991       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="149.476µs"
	E0116 03:30:21.799809       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:30:22.254471       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:30:51.814090       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:30:52.265782       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:31:21.822651       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:31:22.277334       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:31:51.830920       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:31:52.287703       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:32:21.837760       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:32:22.300293       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f] <==
	I0116 03:18:56.536685       1 server_others.go:69] "Using iptables proxy"
	I0116 03:18:56.614974       1 node.go:141] Successfully retrieved node IP: 192.168.72.158
	I0116 03:18:56.809256       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 03:18:56.809399       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 03:18:56.813336       1 server_others.go:152] "Using iptables Proxier"
	I0116 03:18:56.813907       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 03:18:56.817464       1 server.go:846] "Version info" version="v1.28.4"
	I0116 03:18:56.817888       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:18:56.839188       1 config.go:188] "Starting service config controller"
	I0116 03:18:56.840302       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 03:18:56.840389       1 config.go:97] "Starting endpoint slice config controller"
	I0116 03:18:56.840399       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 03:18:56.857121       1 config.go:315] "Starting node config controller"
	I0116 03:18:56.857277       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 03:18:56.941278       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 03:18:56.943076       1 shared_informer.go:318] Caches are synced for service config
	I0116 03:18:56.961222       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e] <==
	W0116 03:18:37.635976       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:18:37.636039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 03:18:37.676555       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 03:18:37.676692       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 03:18:37.700836       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 03:18:37.700961       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0116 03:18:37.721544       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 03:18:37.721750       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 03:18:37.775958       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 03:18:37.776067       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 03:18:37.833044       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 03:18:37.833163       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 03:18:37.847794       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:18:37.847889       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 03:18:37.851160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:18:37.851231       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 03:18:37.949133       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:18:37.949224       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 03:18:37.979886       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:18:37.980003       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0116 03:18:38.010744       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 03:18:38.010842       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 03:18:38.111757       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 03:18:38.111822       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0116 03:18:40.889858       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:13:23 UTC, ends at Tue 2024-01-16 03:32:34 UTC. --
	Jan 16 03:29:40 default-k8s-diff-port-775571 kubelet[3872]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:29:42 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:29:42.113448    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:29:54 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:29:54.124701    3872 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 16 03:29:54 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:29:54.124782    3872 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 16 03:29:54 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:29:54.125039    3872 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lss8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-928d7_kube-system(d3671063-27a1-4ad8-9f5f-b3e01205f483): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 03:29:54 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:29:54.125081    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:30:08 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:30:08.115796    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:30:21 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:30:21.113013    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:30:36 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:30:36.114317    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:30:40 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:30:40.225859    3872 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:30:40 default-k8s-diff-port-775571 kubelet[3872]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:30:40 default-k8s-diff-port-775571 kubelet[3872]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:30:40 default-k8s-diff-port-775571 kubelet[3872]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:30:51 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:30:51.113392    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:31:05 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:31:05.112907    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:31:17 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:31:17.113654    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:31:32 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:31:32.112943    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:31:40 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:31:40.224531    3872 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:31:40 default-k8s-diff-port-775571 kubelet[3872]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:31:40 default-k8s-diff-port-775571 kubelet[3872]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:31:40 default-k8s-diff-port-775571 kubelet[3872]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:31:47 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:31:47.112745    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:31:59 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:31:59.115022    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:32:12 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:32:12.117242    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:32:27 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:32:27.113148    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	
	
	==> storage-provisioner [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da] <==
	I0116 03:18:57.087281       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:18:57.107530       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:18:57.107759       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:18:57.122077       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:18:57.122324       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-775571_9420944d-9631-4a43-8dbd-48fb909c7d8a!
	I0116 03:18:57.130264       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0445ba12-cf52-479b-873a-eccc1627ec07", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-775571_9420944d-9631-4a43-8dbd-48fb909c7d8a became leader
	I0116 03:18:57.223212       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-775571_9420944d-9631-4a43-8dbd-48fb909c7d8a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-775571 -n default-k8s-diff-port-775571
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-775571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-928d7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-775571 describe pod metrics-server-57f55c9bc5-928d7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-775571 describe pod metrics-server-57f55c9bc5-928d7: exit status 1 (83.920775ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-928d7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-775571 describe pod metrics-server-57f55c9bc5-928d7: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.71s)
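The post-mortem above is the harness's standard check: list every pod whose phase is not Running, then describe whatever it finds. A minimal sketch of the same two queries, assuming the default-k8s-diff-port-775571 context is still available; the pod name is the one reported in this run, and the -n kube-system flag on the second command is an addition here (the harness omits it, so its describe looks in the default namespace, which is consistent with the NotFound / exit status 1 above).

    # list non-Running pods across all namespaces (same query as helpers_test.go:261)
    kubectl --context default-k8s-diff-port-775571 get po -A \
        -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running
    # describe the reported pod in the namespace it actually runs in (-n added here)
    kubectl --context default-k8s-diff-port-775571 -n kube-system \
        describe pod metrics-server-57f55c9bc5-928d7

The kubelet section of the dump shows why that pod stays non-Running: its metrics-server image is registered against fake.domain (see the addons enable metrics-server rows in the Audit table further below), so every pull fails with "no such host" and the container sits in ImagePullBackOff.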

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (510.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0116 03:24:50.169732  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 03:26:13.218656  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-934668 -n no-preload-934668
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-16 03:32:33.584923922 +0000 UTC m=+5531.468750505
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
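The wait that timed out can be checked by hand with kubectl; the label selector and namespace come from the failure message above, and the context name from the surrounding status commands. This is a manual equivalent rather than the harness's exact call:

    # wait up to 9 minutes for a dashboard pod to become Ready
    kubectl --context no-preload-934668 -n kubernetes-dashboard \
        wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
    # or just list whatever currently matches the selector
    kubectl --context no-preload-934668 -n kubernetes-dashboard \
        get pods -l k8s-app=kubernetes-dashboard

For context, the Audit table in the post-mortem below records the corresponding addons enable dashboard -p no-preload-934668 invocation with no End Time.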
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-934668 -n no-preload-934668
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-934668 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-934668 logs -n 25: (1.541096368s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-807979 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | disable-driver-mounts-807979                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:06 UTC |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-934668             | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-934668                                   | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-480663            | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-788237        | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-775571  | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC | 16 Jan 24 03:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC |                     |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-934668                  | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-480663                 | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-934668                                   | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC | 16 Jan 24 03:24 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC | 16 Jan 24 03:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-788237             | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC | 16 Jan 24 03:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-775571       | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC | 16 Jan 24 03:23 UTC |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:32 UTC | 16 Jan 24 03:32 UTC |
	| start   | -p newest-cni-190843 --memory=2200 --alsologtostderr   | newest-cni-190843            | jenkins | v1.32.0 | 16 Jan 24 03:32 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:32:05
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:32:05.212846 1016909 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:32:05.213127 1016909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:32:05.213137 1016909 out.go:309] Setting ErrFile to fd 2...
	I0116 03:32:05.213142 1016909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:32:05.213327 1016909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 03:32:05.214010 1016909 out.go:303] Setting JSON to false
	I0116 03:32:05.215295 1016909 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":15275,"bootTime":1705360651,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 03:32:05.215360 1016909 start.go:138] virtualization: kvm guest
	I0116 03:32:05.218234 1016909 out.go:177] * [newest-cni-190843] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 03:32:05.219856 1016909 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:32:05.219856 1016909 notify.go:220] Checking for updates...
	I0116 03:32:05.223126 1016909 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:32:05.225146 1016909 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:32:05.226713 1016909 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 03:32:05.228305 1016909 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 03:32:05.229952 1016909 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:32:05.232014 1016909 config.go:182] Loaded profile config "default-k8s-diff-port-775571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:32:05.232132 1016909 config.go:182] Loaded profile config "embed-certs-480663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:32:05.232253 1016909 config.go:182] Loaded profile config "no-preload-934668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:32:05.232506 1016909 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:32:05.273796 1016909 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 03:32:05.275294 1016909 start.go:298] selected driver: kvm2
	I0116 03:32:05.275312 1016909 start.go:902] validating driver "kvm2" against <nil>
	I0116 03:32:05.275368 1016909 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:32:05.276237 1016909 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:32:05.276324 1016909 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17967-971255/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 03:32:05.292907 1016909 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 03:32:05.292964 1016909 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0116 03:32:05.293049 1016909 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0116 03:32:05.293316 1016909 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0116 03:32:05.293389 1016909 cni.go:84] Creating CNI manager for ""
	I0116 03:32:05.293405 1016909 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:32:05.293417 1016909 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 03:32:05.293426 1016909 start_flags.go:321] config:
	{Name:newest-cni-190843 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-190843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:32:05.293661 1016909 iso.go:125] acquiring lock: {Name:mkc0ce0b4b435c5fb7570521b794e5982bb7bf6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:32:05.295893 1016909 out.go:177] * Starting control plane node newest-cni-190843 in cluster newest-cni-190843
	I0116 03:32:05.297239 1016909 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 03:32:05.297285 1016909 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0116 03:32:05.297295 1016909 cache.go:56] Caching tarball of preloaded images
	I0116 03:32:05.297432 1016909 preload.go:174] Found /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 03:32:05.297447 1016909 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0116 03:32:05.297603 1016909 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/newest-cni-190843/config.json ...
	I0116 03:32:05.297631 1016909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/newest-cni-190843/config.json: {Name:mk7d619c3c7ca0d35f7ef7967861c85d73a75388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:32:05.297892 1016909 start.go:365] acquiring machines lock for newest-cni-190843: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:32:05.297959 1016909 start.go:369] acquired machines lock for "newest-cni-190843" in 41.247µs
	I0116 03:32:05.297992 1016909 start.go:93] Provisioning new machine with config: &{Name:newest-cni-190843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-190843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
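The two log lines above this config dump show the profile being persisted to config.json under a write lock. As a minimal illustrative sketch (not minikube's actual lock.go/profile.go code; the ClusterConfig fields shown are a hypothetical subset), the same "serialize and persist the profile" step can be done with an atomic temp-file-plus-rename write:

package main

import (
	"encoding/json"
	"log"
	"os"
	"path/filepath"
)

// ClusterConfig is a hypothetical subset of the fields in the dump above.
type ClusterConfig struct {
	Name              string `json:"Name"`
	KubernetesVersion string `json:"KubernetesVersion"`
	ContainerRuntime  string `json:"ContainerRuntime"`
	Memory            int    `json:"Memory"`
	CPUs              int    `json:"CPUs"`
}

func saveConfig(path string, cfg ClusterConfig) error {
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	// Write to a temp file in the same directory, then rename: rename is
	// atomic on POSIX filesystems, which stands in here for the write lock
	// the log acquires around config.json.
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	dir, err := os.MkdirTemp("", "profile")
	if err != nil {
		log.Fatal(err)
	}
	cfg := ClusterConfig{Name: "newest-cni-190843", KubernetesVersion: "v1.29.0-rc.2", ContainerRuntime: "crio", Memory: 2200, CPUs: 2}
	if err := saveConfig(filepath.Join(dir, "config.json"), cfg); err != nil {
		log.Fatal(err)
	}
}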
	I0116 03:32:05.298108 1016909 start.go:125] createHost starting for "" (driver="kvm2")
	I0116 03:32:05.299905 1016909 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0116 03:32:05.300049 1016909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:32:05.300094 1016909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:32:05.316399 1016909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44549
	I0116 03:32:05.317031 1016909 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:32:05.317606 1016909 main.go:141] libmachine: Using API Version  1
	I0116 03:32:05.317632 1016909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:32:05.318081 1016909 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:32:05.318297 1016909 main.go:141] libmachine: (newest-cni-190843) Calling .GetMachineName
	I0116 03:32:05.318486 1016909 main.go:141] libmachine: (newest-cni-190843) Calling .DriverName
	I0116 03:32:05.318656 1016909 start.go:159] libmachine.API.Create for "newest-cni-190843" (driver="kvm2")
	I0116 03:32:05.318690 1016909 client.go:168] LocalClient.Create starting
	I0116 03:32:05.318739 1016909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem
	I0116 03:32:05.318781 1016909 main.go:141] libmachine: Decoding PEM data...
	I0116 03:32:05.318806 1016909 main.go:141] libmachine: Parsing certificate...
	I0116 03:32:05.318882 1016909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem
	I0116 03:32:05.318903 1016909 main.go:141] libmachine: Decoding PEM data...
	I0116 03:32:05.318915 1016909 main.go:141] libmachine: Parsing certificate...
	I0116 03:32:05.318934 1016909 main.go:141] libmachine: Running pre-create checks...
	I0116 03:32:05.318944 1016909 main.go:141] libmachine: (newest-cni-190843) Calling .PreCreateCheck
	I0116 03:32:05.319384 1016909 main.go:141] libmachine: (newest-cni-190843) Calling .GetConfigRaw
	I0116 03:32:05.319892 1016909 main.go:141] libmachine: Creating machine...
	I0116 03:32:05.319907 1016909 main.go:141] libmachine: (newest-cni-190843) Calling .Create
	I0116 03:32:05.320070 1016909 main.go:141] libmachine: (newest-cni-190843) Creating KVM machine...
	I0116 03:32:05.321609 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | found existing default KVM network
	I0116 03:32:05.323542 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:05.323355 1016931 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147f10}
	I0116 03:32:05.330248 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | trying to create private KVM network mk-newest-cni-190843 192.168.39.0/24...
	I0116 03:32:05.415343 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | private KVM network mk-newest-cni-190843 192.168.39.0/24 created
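The network.go line above picks a free private /24 before creating the mk-newest-cni-190843 libvirt network. A rough, self-contained sketch of that selection step (candidate list and overlap test are simplified assumptions, not the real network.go logic):

package main

import (
	"fmt"
	"log"
	"net"
)

// freeSubnet returns the first candidate /24 whose range does not overlap
// any address already assigned to a host interface.
func freeSubnet(candidates []string) (*net.IPNet, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, c := range candidates {
		_, subnet, err := net.ParseCIDR(c)
		if err != nil {
			return nil, err
		}
		inUse := false
		for _, ifc := range ifaces {
			addrs, _ := ifc.Addrs()
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
					inUse = true
				}
			}
		}
		if !inUse {
			return subnet, nil
		}
	}
	return nil, fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	subnet, err := freeSubnet([]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("using free private subnet", subnet)
}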
	I0116 03:32:05.415384 1016909 main.go:141] libmachine: (newest-cni-190843) Setting up store path in /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843 ...
	I0116 03:32:05.415412 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:05.415316 1016931 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 03:32:05.415434 1016909 main.go:141] libmachine: (newest-cni-190843) Building disk image from file:///home/jenkins/minikube-integration/17967-971255/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 03:32:05.415632 1016909 main.go:141] libmachine: (newest-cni-190843) Downloading /home/jenkins/minikube-integration/17967-971255/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17967-971255/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 03:32:05.668581 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:05.668448 1016931 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/id_rsa...
	I0116 03:32:05.751163 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:05.751025 1016931 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/newest-cni-190843.rawdisk...
	I0116 03:32:05.751206 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Writing magic tar header
	I0116 03:32:05.751232 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Writing SSH key tar header
	I0116 03:32:05.751393 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:05.751314 1016931 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843 ...
	I0116 03:32:05.751491 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843
	I0116 03:32:05.751527 1016909 main.go:141] libmachine: (newest-cni-190843) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843 (perms=drwx------)
	I0116 03:32:05.751544 1016909 main.go:141] libmachine: (newest-cni-190843) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube/machines (perms=drwxr-xr-x)
	I0116 03:32:05.751557 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube/machines
	I0116 03:32:05.751594 1016909 main.go:141] libmachine: (newest-cni-190843) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube (perms=drwxr-xr-x)
	I0116 03:32:05.751621 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 03:32:05.751636 1016909 main.go:141] libmachine: (newest-cni-190843) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255 (perms=drwxrwxr-x)
	I0116 03:32:05.751652 1016909 main.go:141] libmachine: (newest-cni-190843) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 03:32:05.751669 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255
	I0116 03:32:05.751712 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 03:32:05.751727 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Checking permissions on dir: /home/jenkins
	I0116 03:32:05.751737 1016909 main.go:141] libmachine: (newest-cni-190843) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 03:32:05.751748 1016909 main.go:141] libmachine: (newest-cni-190843) Creating domain...
	I0116 03:32:05.751760 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Checking permissions on dir: /home
	I0116 03:32:05.751772 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Skipping /home - not owner
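The permission-fixing block above walks from the machine directory up toward /home, setting the owner execute bit on directories it owns and skipping the rest ("Skipping /home - not owner"). A simplified sketch of that walk, assuming Linux and using only the standard library (this is not the real common.go implementation):

package main

import (
	"log"
	"os"
	"path/filepath"
	"syscall"
)

// fixPerms walks from start up to stop, adding the owner execute bit to
// every directory owned by the current user so the path stays traversable.
func fixPerms(start, stop string) error {
	for dir := start; ; dir = filepath.Dir(dir) {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		st, ok := info.Sys().(*syscall.Stat_t)
		if !ok || int(st.Uid) != os.Getuid() {
			log.Printf("Skipping %s - not owner", dir)
		} else if info.Mode()&0o100 == 0 {
			if err := os.Chmod(dir, info.Mode()|0o100); err != nil {
				return err
			}
			log.Printf("Setting executable bit on %s", dir)
		}
		if dir == stop || dir == filepath.Dir(dir) {
			return nil
		}
	}
}

func main() {
	machineDir := filepath.Join(os.TempDir(), "minikube-demo", "machines", "example")
	if err := os.MkdirAll(machineDir, 0o700); err != nil {
		log.Fatal(err)
	}
	if err := fixPerms(machineDir, os.TempDir()); err != nil {
		log.Fatal(err)
	}
}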
	I0116 03:32:05.753038 1016909 main.go:141] libmachine: (newest-cni-190843) define libvirt domain using xml: 
	I0116 03:32:05.753063 1016909 main.go:141] libmachine: (newest-cni-190843) <domain type='kvm'>
	I0116 03:32:05.753074 1016909 main.go:141] libmachine: (newest-cni-190843)   <name>newest-cni-190843</name>
	I0116 03:32:05.753086 1016909 main.go:141] libmachine: (newest-cni-190843)   <memory unit='MiB'>2200</memory>
	I0116 03:32:05.753096 1016909 main.go:141] libmachine: (newest-cni-190843)   <vcpu>2</vcpu>
	I0116 03:32:05.753109 1016909 main.go:141] libmachine: (newest-cni-190843)   <features>
	I0116 03:32:05.753122 1016909 main.go:141] libmachine: (newest-cni-190843)     <acpi/>
	I0116 03:32:05.753142 1016909 main.go:141] libmachine: (newest-cni-190843)     <apic/>
	I0116 03:32:05.753158 1016909 main.go:141] libmachine: (newest-cni-190843)     <pae/>
	I0116 03:32:05.753174 1016909 main.go:141] libmachine: (newest-cni-190843)     
	I0116 03:32:05.753189 1016909 main.go:141] libmachine: (newest-cni-190843)   </features>
	I0116 03:32:05.753201 1016909 main.go:141] libmachine: (newest-cni-190843)   <cpu mode='host-passthrough'>
	I0116 03:32:05.753216 1016909 main.go:141] libmachine: (newest-cni-190843)   
	I0116 03:32:05.753228 1016909 main.go:141] libmachine: (newest-cni-190843)   </cpu>
	I0116 03:32:05.753250 1016909 main.go:141] libmachine: (newest-cni-190843)   <os>
	I0116 03:32:05.753280 1016909 main.go:141] libmachine: (newest-cni-190843)     <type>hvm</type>
	I0116 03:32:05.753291 1016909 main.go:141] libmachine: (newest-cni-190843)     <boot dev='cdrom'/>
	I0116 03:32:05.753299 1016909 main.go:141] libmachine: (newest-cni-190843)     <boot dev='hd'/>
	I0116 03:32:05.753306 1016909 main.go:141] libmachine: (newest-cni-190843)     <bootmenu enable='no'/>
	I0116 03:32:05.753314 1016909 main.go:141] libmachine: (newest-cni-190843)   </os>
	I0116 03:32:05.753320 1016909 main.go:141] libmachine: (newest-cni-190843)   <devices>
	I0116 03:32:05.753329 1016909 main.go:141] libmachine: (newest-cni-190843)     <disk type='file' device='cdrom'>
	I0116 03:32:05.753339 1016909 main.go:141] libmachine: (newest-cni-190843)       <source file='/home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/boot2docker.iso'/>
	I0116 03:32:05.753348 1016909 main.go:141] libmachine: (newest-cni-190843)       <target dev='hdc' bus='scsi'/>
	I0116 03:32:05.753376 1016909 main.go:141] libmachine: (newest-cni-190843)       <readonly/>
	I0116 03:32:05.753394 1016909 main.go:141] libmachine: (newest-cni-190843)     </disk>
	I0116 03:32:05.753402 1016909 main.go:141] libmachine: (newest-cni-190843)     <disk type='file' device='disk'>
	I0116 03:32:05.753409 1016909 main.go:141] libmachine: (newest-cni-190843)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 03:32:05.753429 1016909 main.go:141] libmachine: (newest-cni-190843)       <source file='/home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/newest-cni-190843.rawdisk'/>
	I0116 03:32:05.753437 1016909 main.go:141] libmachine: (newest-cni-190843)       <target dev='hda' bus='virtio'/>
	I0116 03:32:05.753444 1016909 main.go:141] libmachine: (newest-cni-190843)     </disk>
	I0116 03:32:05.753452 1016909 main.go:141] libmachine: (newest-cni-190843)     <interface type='network'>
	I0116 03:32:05.753459 1016909 main.go:141] libmachine: (newest-cni-190843)       <source network='mk-newest-cni-190843'/>
	I0116 03:32:05.753467 1016909 main.go:141] libmachine: (newest-cni-190843)       <model type='virtio'/>
	I0116 03:32:05.753473 1016909 main.go:141] libmachine: (newest-cni-190843)     </interface>
	I0116 03:32:05.753483 1016909 main.go:141] libmachine: (newest-cni-190843)     <interface type='network'>
	I0116 03:32:05.753492 1016909 main.go:141] libmachine: (newest-cni-190843)       <source network='default'/>
	I0116 03:32:05.753498 1016909 main.go:141] libmachine: (newest-cni-190843)       <model type='virtio'/>
	I0116 03:32:05.753507 1016909 main.go:141] libmachine: (newest-cni-190843)     </interface>
	I0116 03:32:05.753512 1016909 main.go:141] libmachine: (newest-cni-190843)     <serial type='pty'>
	I0116 03:32:05.753521 1016909 main.go:141] libmachine: (newest-cni-190843)       <target port='0'/>
	I0116 03:32:05.753527 1016909 main.go:141] libmachine: (newest-cni-190843)     </serial>
	I0116 03:32:05.753535 1016909 main.go:141] libmachine: (newest-cni-190843)     <console type='pty'>
	I0116 03:32:05.753541 1016909 main.go:141] libmachine: (newest-cni-190843)       <target type='serial' port='0'/>
	I0116 03:32:05.753549 1016909 main.go:141] libmachine: (newest-cni-190843)     </console>
	I0116 03:32:05.753558 1016909 main.go:141] libmachine: (newest-cni-190843)     <rng model='virtio'>
	I0116 03:32:05.753576 1016909 main.go:141] libmachine: (newest-cni-190843)       <backend model='random'>/dev/random</backend>
	I0116 03:32:05.753584 1016909 main.go:141] libmachine: (newest-cni-190843)     </rng>
	I0116 03:32:05.753590 1016909 main.go:141] libmachine: (newest-cni-190843)     
	I0116 03:32:05.753595 1016909 main.go:141] libmachine: (newest-cni-190843)     
	I0116 03:32:05.753604 1016909 main.go:141] libmachine: (newest-cni-190843)   </devices>
	I0116 03:32:05.753609 1016909 main.go:141] libmachine: (newest-cni-190843) </domain>
	I0116 03:32:05.753619 1016909 main.go:141] libmachine: (newest-cni-190843) 
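The block above is the raw domain XML the kvm2 driver feeds to libvirt before the "Creating domain..." step. As a hedged illustration only (the driver itself talks to libvirt over its API rather than shelling out), the same definition could be applied with the stock virsh client: write the XML to disk, `virsh define` it, then `virsh start` the resulting domain. This assumes virsh is installed and qemu:///system is reachable.

package main

import (
	"log"
	"os"
	"os/exec"
)

func defineAndStart(name, domainXML string) error {
	f, err := os.CreateTemp("", name+"-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(domainXML); err != nil {
		return err
	}
	f.Close()

	// "virsh define" registers the persistent domain from the XML file and
	// "virsh start" boots it; both are standard libvirt client commands.
	for _, args := range [][]string{
		{"--connect", "qemu:///system", "define", f.Name()},
		{"--connect", "qemu:///system", "start", name},
	} {
		cmd := exec.Command("virsh", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Truncated placeholder XML for brevity; the full definition is the
	// <domain> document logged above.
	xml := `<domain type='kvm'><name>demo-vm</name>...</domain>`
	if err := defineAndStart("demo-vm", xml); err != nil {
		log.Fatal(err)
	}
}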
	I0116 03:32:05.758676 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:ed:fa:1d in network default
	I0116 03:32:05.759420 1016909 main.go:141] libmachine: (newest-cni-190843) Ensuring networks are active...
	I0116 03:32:05.759449 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:05.760711 1016909 main.go:141] libmachine: (newest-cni-190843) Ensuring network default is active
	I0116 03:32:05.761076 1016909 main.go:141] libmachine: (newest-cni-190843) Ensuring network mk-newest-cni-190843 is active
	I0116 03:32:05.761857 1016909 main.go:141] libmachine: (newest-cni-190843) Getting domain xml...
	I0116 03:32:05.762956 1016909 main.go:141] libmachine: (newest-cni-190843) Creating domain...
	I0116 03:32:07.062896 1016909 main.go:141] libmachine: (newest-cni-190843) Waiting to get IP...
	I0116 03:32:07.063781 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:07.064415 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:07.064542 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:07.064443 1016931 retry.go:31] will retry after 251.904581ms: waiting for machine to come up
	I0116 03:32:07.318111 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:07.318712 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:07.318749 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:07.318683 1016931 retry.go:31] will retry after 236.499042ms: waiting for machine to come up
	I0116 03:32:07.557170 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:07.557684 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:07.557720 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:07.557632 1016931 retry.go:31] will retry after 394.353611ms: waiting for machine to come up
	I0116 03:32:07.954083 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:07.954680 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:07.954723 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:07.954651 1016931 retry.go:31] will retry after 574.464873ms: waiting for machine to come up
	I0116 03:32:08.530742 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:08.531295 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:08.531327 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:08.531239 1016931 retry.go:31] will retry after 698.276796ms: waiting for machine to come up
	I0116 03:32:09.231168 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:09.231722 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:09.231752 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:09.231668 1016931 retry.go:31] will retry after 853.173834ms: waiting for machine to come up
	I0116 03:32:10.086842 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:10.087297 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:10.087328 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:10.087242 1016931 retry.go:31] will retry after 1.154956983s: waiting for machine to come up
	I0116 03:32:11.244096 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:11.244636 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:11.244662 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:11.244578 1016931 retry.go:31] will retry after 1.483844581s: waiting for machine to come up
	I0116 03:32:12.730593 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:12.731160 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:12.731201 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:12.731083 1016931 retry.go:31] will retry after 1.775480625s: waiting for machine to come up
	I0116 03:32:14.508060 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:14.508672 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:14.508702 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:14.508597 1016931 retry.go:31] will retry after 1.937489177s: waiting for machine to come up
	I0116 03:32:16.447960 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:16.448548 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:16.448586 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:16.448497 1016931 retry.go:31] will retry after 2.201805443s: waiting for machine to come up
	I0116 03:32:18.652247 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:18.652732 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:18.652766 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:18.652652 1016931 retry.go:31] will retry after 2.227539038s: waiting for machine to come up
	I0116 03:32:20.882300 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:20.882790 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:20.882821 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:20.882749 1016931 retry.go:31] will retry after 3.22967887s: waiting for machine to come up
	I0116 03:32:24.114862 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:24.115394 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:32:24.115425 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:32:24.115332 1016931 retry.go:31] will retry after 5.229877584s: waiting for machine to come up
	I0116 03:32:29.348212 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:29.348804 1016909 main.go:141] libmachine: (newest-cni-190843) Found IP for machine: 192.168.39.3
	I0116 03:32:29.348825 1016909 main.go:141] libmachine: (newest-cni-190843) Reserving static IP address...
	I0116 03:32:29.348840 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has current primary IP address 192.168.39.3 and MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:29.349177 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find host DHCP lease matching {name: "newest-cni-190843", mac: "52:54:00:b0:40:c6", ip: "192.168.39.3"} in network mk-newest-cni-190843
	I0116 03:32:29.435959 1016909 main.go:141] libmachine: (newest-cni-190843) Reserved static IP address: 192.168.39.3
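The repeated "will retry after ...: waiting for machine to come up" lines above come from a retry helper that sleeps for a growing, jittered interval between attempts until the domain reports an IP. A minimal, self-contained version of that pattern (not the actual retry.go implementation) looks like this:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// roughly doubling the base delay each round and adding up to 50% jitter.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		} else {
			jittered := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
			fmt.Printf("will retry after %v: %v\n", jittered, err)
			time.Sleep(jittered)
			delay *= 2
		}
	}
	return errors.New("machine never came up")
}

func main() {
	tries := 0
	_ = retryWithBackoff(8, 250*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		fmt.Println("Found IP for machine")
		return nil
	})
}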
	I0116 03:32:29.435998 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Getting to WaitForSSH function...
	I0116 03:32:29.436008 1016909 main.go:141] libmachine: (newest-cni-190843) Waiting for SSH to be available...
	I0116 03:32:29.439085 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:32:29.439435 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b0:40:c6", ip: ""} in network mk-newest-cni-190843
	I0116 03:32:29.439471 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find defined IP address of network mk-newest-cni-190843 interface with MAC address 52:54:00:b0:40:c6
	I0116 03:32:29.439554 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Using SSH client type: external
	I0116 03:32:29.439585 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/id_rsa (-rw-------)
	I0116 03:32:29.439618 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:32:29.439629 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | About to run SSH command:
	I0116 03:32:29.439640 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | exit 0
	I0116 03:32:29.443796 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | SSH cmd err, output: exit status 255: 
	I0116 03:32:29.443834 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0116 03:32:29.443853 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | command : exit 0
	I0116 03:32:29.443864 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | err     : exit status 255
	I0116 03:32:29.443913 1016909 main.go:141] libmachine: (newest-cni-190843) DBG | output  : 
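The failed probe above (exit status 255) is the external SSH client running `exit 0` against the new VM with a fixed set of OpenSSH options. A hedged sketch of that probe using os/exec; the user, host and key path below are placeholders, and the option list is the one shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

// sshReady returns nil once `ssh ... exit 0` succeeds, meaning sshd is up
// and the key is accepted; a non-zero exit (such as status 255 above) means
// the machine is not reachable yet and the caller should retry.
func sshReady(user, host, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %w (output: %s)", err, out)
	}
	return nil
}

func main() {
	if err := sshReady("docker", "192.168.39.3", "/path/to/id_rsa"); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("SSH is available")
	}
}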
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 03:13:44 UTC, ends at Tue 2024-01-16 03:32:34 UTC. --
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.513801870Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375954513779844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=e917d6e8-1477-4c77-8483-5d4c684e64e3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.515042700Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6fdfa0c8-07a1-4d14-b016-800dad8f07b4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.515117978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6fdfa0c8-07a1-4d14-b016-800dad8f07b4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.515463711Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8,PodSandboxId:5e1acd0ff81ee6e038665fd479b005d53e7e45ccf4a92e1ec7062b6b99e13f63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705375168510106491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fr424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f24ae333-7f56-47bf-b66f-3192010a2cc4,},Annotations:map[string]string{io.kubernetes.container.hash: cadb6afb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19,PodSandboxId:d358b23fec2261e29f642adb4117f5eccf187110d2c77a463cabf3299fd607d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705375168426891052,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4f416a-8bdc-4a7c-bea1-14015339520b,},Annotations:map[string]string{io.kubernetes.container.hash: 31517ce9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058,PodSandboxId:77a79b23b1fd26380ac1719d1b4ed33c19164c2c51391b3a45ae8e0e0a289d6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705375168032822263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k2kc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d05aee05-aff7-4500-b656-8f66a3f622d2,},Annotations:map[string]string{io.kubernetes.container.hash: ac1a97cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021,PodSandboxId:1e49a2fad92b789d84a633329f73074f483688d021d3d4d59887006813e23f68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705375144408232398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-934668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e74975f6aa220beab1f11fbcbde0a11f,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f,PodSandboxId:dce66905a9839c8df3b9dd726e3af6a9ea2276db74921a535ad08af467e1490a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705375144321629373,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-934668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee5fb5b6c14085db3a33ab69a90c8d,},Annotations:map
[string]string{io.kubernetes.container.hash: 65f767d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e,PodSandboxId:510114ba9343ad2ca2f13a17994a673df661cc041523ab07e43ef45b0282909d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705375143630227499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-934668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e82d7d68020f66f8fa75a00b7e2c51a,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 14cdaf8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d,PodSandboxId:a7f0acc70111402b8b83254c2bea9efbc0052501981735a79f2cfed30d31ff94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705375143537262799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-934668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d96e594709af7277b638ca1d28bf317,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6fdfa0c8-07a1-4d14-b016-800dad8f07b4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.572905798Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=151179c4-13b3-4164-9a5b-733c4f2c53f5 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.572975741Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=151179c4-13b3-4164-9a5b-733c4f2c53f5 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.574277249Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=96db2b2b-40ec-41c2-bfc6-1bb61eada29c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.574660384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375954574645296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=96db2b2b-40ec-41c2-bfc6-1bb61eada29c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.575496186Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=96fe354f-19c9-4bcd-adce-e1faefed5410 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.575539980Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=96fe354f-19c9-4bcd-adce-e1faefed5410 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.575704106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8,PodSandboxId:5e1acd0ff81ee6e038665fd479b005d53e7e45ccf4a92e1ec7062b6b99e13f63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705375168510106491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fr424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f24ae333-7f56-47bf-b66f-3192010a2cc4,},Annotations:map[string]string{io.kubernetes.container.hash: cadb6afb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19,PodSandboxId:d358b23fec2261e29f642adb4117f5eccf187110d2c77a463cabf3299fd607d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705375168426891052,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4f416a-8bdc-4a7c-bea1-14015339520b,},Annotations:map[string]string{io.kubernetes.container.hash: 31517ce9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058,PodSandboxId:77a79b23b1fd26380ac1719d1b4ed33c19164c2c51391b3a45ae8e0e0a289d6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705375168032822263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k2kc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d05aee05-aff7-4500-b656-8f66a3f622d2,},Annotations:map[string]string{io.kubernetes.container.hash: ac1a97cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021,PodSandboxId:1e49a2fad92b789d84a633329f73074f483688d021d3d4d59887006813e23f68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705375144408232398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-934668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e74975f6aa220beab1f11fbcbde0a11f,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f,PodSandboxId:dce66905a9839c8df3b9dd726e3af6a9ea2276db74921a535ad08af467e1490a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705375144321629373,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-934668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee5fb5b6c14085db3a33ab69a90c8d,},Annotations:map
[string]string{io.kubernetes.container.hash: 65f767d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e,PodSandboxId:510114ba9343ad2ca2f13a17994a673df661cc041523ab07e43ef45b0282909d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705375143630227499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-934668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e82d7d68020f66f8fa75a00b7e2c51a,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 14cdaf8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d,PodSandboxId:a7f0acc70111402b8b83254c2bea9efbc0052501981735a79f2cfed30d31ff94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705375143537262799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-934668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d96e594709af7277b638ca1d28bf317,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=96fe354f-19c9-4bcd-adce-e1faefed5410 name=/runtime.v1.RuntimeService/ListContainers
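Each Version / ImageFsInfo / ListContainers pair in the CRI-O journal above is a CRI gRPC call from the kubelet hitting the CRI-O socket. The same calls can be issued directly; below is a rough sketch using the published CRI client stubs (the socket path is an assumption for this host, and `crictl ps` is the usual CLI shortcut for the same query):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O listens on a unix socket; gRPC accepts unix:// targets directly.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Mirrors the /runtime.v1.RuntimeService/Version request in the journal.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// An empty filter returns the full container list, matching the
	// "No filters were applied, returning full container list" debug line.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}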
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.622413033Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=db97c43e-d0c5-4ad8-8b03-da7ff1a52f7b name=/runtime.v1.RuntimeService/Version
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.622471256Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=db97c43e-d0c5-4ad8-8b03-da7ff1a52f7b name=/runtime.v1.RuntimeService/Version
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.624020606Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6577be36-bd0b-43b2-adf6-d326314bf0de name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.624552873Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375954624539301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=6577be36-bd0b-43b2-adf6-d326314bf0de name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.625205828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7b180c2d-8bd8-4628-96de-107c77c1bb54 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.625253109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7b180c2d-8bd8-4628-96de-107c77c1bb54 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.625485517Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8,PodSandboxId:5e1acd0ff81ee6e038665fd479b005d53e7e45ccf4a92e1ec7062b6b99e13f63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705375168510106491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fr424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f24ae333-7f56-47bf-b66f-3192010a2cc4,},Annotations:map[string]string{io.kubernetes.container.hash: cadb6afb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19,PodSandboxId:d358b23fec2261e29f642adb4117f5eccf187110d2c77a463cabf3299fd607d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705375168426891052,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4f416a-8bdc-4a7c-bea1-14015339520b,},Annotations:map[string]string{io.kubernetes.container.hash: 31517ce9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058,PodSandboxId:77a79b23b1fd26380ac1719d1b4ed33c19164c2c51391b3a45ae8e0e0a289d6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705375168032822263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k2kc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d05aee05-aff7-4500-b656-8f66a3f622d2,},Annotations:map[string]string{io.kubernetes.container.hash: ac1a97cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021,PodSandboxId:1e49a2fad92b789d84a633329f73074f483688d021d3d4d59887006813e23f68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705375144408232398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-934668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e74975f6aa220beab1f11fbcbde0a11f,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f,PodSandboxId:dce66905a9839c8df3b9dd726e3af6a9ea2276db74921a535ad08af467e1490a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705375144321629373,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-934668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee5fb5b6c14085db3a33ab69a90c8d,},Annotations:map
[string]string{io.kubernetes.container.hash: 65f767d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e,PodSandboxId:510114ba9343ad2ca2f13a17994a673df661cc041523ab07e43ef45b0282909d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705375143630227499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-934668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e82d7d68020f66f8fa75a00b7e2c51a,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 14cdaf8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d,PodSandboxId:a7f0acc70111402b8b83254c2bea9efbc0052501981735a79f2cfed30d31ff94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705375143537262799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-934668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d96e594709af7277b638ca1d28bf317,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7b180c2d-8bd8-4628-96de-107c77c1bb54 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.674091442Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ac5b768c-2290-42b0-90b4-6bc329672526 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.674214169Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ac5b768c-2290-42b0-90b4-6bc329672526 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.675839313Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=da1acbdd-f088-4814-9ef2-da345bdad592 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.676487217Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375954676470338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=da1acbdd-f088-4814-9ef2-da345bdad592 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.677052063Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3706074f-1a69-47d6-9148-ab28167c4a86 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.677098617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3706074f-1a69-47d6-9148-ab28167c4a86 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:34 no-preload-934668 crio[726]: time="2024-01-16 03:32:34.677268364Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8,PodSandboxId:5e1acd0ff81ee6e038665fd479b005d53e7e45ccf4a92e1ec7062b6b99e13f63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705375168510106491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fr424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f24ae333-7f56-47bf-b66f-3192010a2cc4,},Annotations:map[string]string{io.kubernetes.container.hash: cadb6afb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19,PodSandboxId:d358b23fec2261e29f642adb4117f5eccf187110d2c77a463cabf3299fd607d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705375168426891052,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4f416a-8bdc-4a7c-bea1-14015339520b,},Annotations:map[string]string{io.kubernetes.container.hash: 31517ce9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058,PodSandboxId:77a79b23b1fd26380ac1719d1b4ed33c19164c2c51391b3a45ae8e0e0a289d6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705375168032822263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k2kc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d05aee05-aff7-4500-b656-8f66a3f622d2,},Annotations:map[string]string{io.kubernetes.container.hash: ac1a97cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021,PodSandboxId:1e49a2fad92b789d84a633329f73074f483688d021d3d4d59887006813e23f68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705375144408232398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-934668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e74975f6aa220beab1f11fbcbde0a11f,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f,PodSandboxId:dce66905a9839c8df3b9dd726e3af6a9ea2276db74921a535ad08af467e1490a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705375144321629373,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-934668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee5fb5b6c14085db3a33ab69a90c8d,},Annotations:map
[string]string{io.kubernetes.container.hash: 65f767d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e,PodSandboxId:510114ba9343ad2ca2f13a17994a673df661cc041523ab07e43ef45b0282909d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705375143630227499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-934668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e82d7d68020f66f8fa75a00b7e2c51a,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 14cdaf8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d,PodSandboxId:a7f0acc70111402b8b83254c2bea9efbc0052501981735a79f2cfed30d31ff94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705375143537262799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-934668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d96e594709af7277b638ca1d28bf317,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3706074f-1a69-47d6-9148-ab28167c4a86 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	153d0b659aaa8       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   13 minutes ago      Running             kube-proxy                0                   5e1acd0ff81ee       kube-proxy-fr424
	4a915cd4aa42f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   d358b23fec226       storage-provisioner
	229310b5851cf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   77a79b23b1fd2       coredns-76f75df574-k2kc7
	63e8de06e9ec3       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   13 minutes ago      Running             kube-scheduler            2                   1e49a2fad92b7       kube-scheduler-no-preload-934668
	2abc1d37662ff       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   13 minutes ago      Running             etcd                      2                   dce66905a9839       etcd-no-preload-934668
	f2403bf8a85e7       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   13 minutes ago      Running             kube-apiserver            2                   510114ba9343a       kube-apiserver-no-preload-934668
	997be6a446a80       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   13 minutes ago      Running             kube-controller-manager   2                   a7f0acc701114       kube-controller-manager-no-preload-934668
	
	
	==> coredns [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:54381 - 14223 "HINFO IN 3864013163820413067.2923061323766107152. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010376047s
	
	
	==> describe nodes <==
	Name:               no-preload-934668
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-934668
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=no-preload-934668
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_19_12_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:19:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-934668
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:32:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:29:45 +0000   Tue, 16 Jan 2024 03:19:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:29:45 +0000   Tue, 16 Jan 2024 03:19:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:29:45 +0000   Tue, 16 Jan 2024 03:19:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:29:45 +0000   Tue, 16 Jan 2024 03:19:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.29
	  Hostname:    no-preload-934668
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 69003b7a5dc0442ba5117fe3b27f724c
	  System UUID:                69003b7a-5dc0-442b-a511-7fe3b27f724c
	  Boot ID:                    8fb8a0ab-9153-4d95-93af-0adc6a5ad0e7
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-k2kc7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-934668                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-934668             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-934668    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-fr424                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-934668             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-6w2t7              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-934668 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-934668 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-934668 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node no-preload-934668 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node no-preload-934668 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node no-preload-934668 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node no-preload-934668 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node no-preload-934668 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-934668 event: Registered Node no-preload-934668 in Controller
	
	
	==> dmesg <==
	[Jan16 03:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071544] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.021077] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.599303] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.162525] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.595943] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.400645] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.135309] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.163363] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.152234] systemd-fstab-generator[685]: Ignoring "noauto" for root device
	[  +0.246445] systemd-fstab-generator[709]: Ignoring "noauto" for root device
	[Jan16 03:14] systemd-fstab-generator[1339]: Ignoring "noauto" for root device
	[ +20.223374] kauditd_printk_skb: 29 callbacks suppressed
	[Jan16 03:19] systemd-fstab-generator[3951]: Ignoring "noauto" for root device
	[  +9.868384] systemd-fstab-generator[4287]: Ignoring "noauto" for root device
	[ +15.225899] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f] <==
	{"level":"info","ts":"2024-01-16T03:19:06.029894Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-16T03:19:06.029956Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-16T03:19:06.028922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c95c136ec966d0b switched to configuration voters=(5518529360054086923)"}
	{"level":"info","ts":"2024-01-16T03:19:06.030153Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8a2a27601d7bfa03","local-member-id":"4c95c136ec966d0b","added-peer-id":"4c95c136ec966d0b","added-peer-peer-urls":["https://192.168.50.29:2380"]}
	{"level":"info","ts":"2024-01-16T03:19:06.802861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c95c136ec966d0b is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-16T03:19:06.802975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c95c136ec966d0b became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-16T03:19:06.803034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c95c136ec966d0b received MsgPreVoteResp from 4c95c136ec966d0b at term 1"}
	{"level":"info","ts":"2024-01-16T03:19:06.803068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c95c136ec966d0b became candidate at term 2"}
	{"level":"info","ts":"2024-01-16T03:19:06.803092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c95c136ec966d0b received MsgVoteResp from 4c95c136ec966d0b at term 2"}
	{"level":"info","ts":"2024-01-16T03:19:06.80312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c95c136ec966d0b became leader at term 2"}
	{"level":"info","ts":"2024-01-16T03:19:06.803145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4c95c136ec966d0b elected leader 4c95c136ec966d0b at term 2"}
	{"level":"info","ts":"2024-01-16T03:19:06.804881Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4c95c136ec966d0b","local-member-attributes":"{Name:no-preload-934668 ClientURLs:[https://192.168.50.29:2379]}","request-path":"/0/members/4c95c136ec966d0b/attributes","cluster-id":"8a2a27601d7bfa03","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T03:19:06.804965Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:19:06.805029Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:19:06.805927Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T03:19:06.805986Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-16T03:19:06.805048Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:19:06.80714Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8a2a27601d7bfa03","local-member-id":"4c95c136ec966d0b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:19:06.807363Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:19:06.807424Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:19:06.808159Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.29:2379"}
	{"level":"info","ts":"2024-01-16T03:19:06.809048Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T03:29:06.850629Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":719}
	{"level":"info","ts":"2024-01-16T03:29:06.854147Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":719,"took":"2.848621ms","hash":3233103274}
	{"level":"info","ts":"2024-01-16T03:29:06.854506Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3233103274,"revision":719,"compact-revision":-1}
	
	
	==> kernel <==
	 03:32:35 up 18 min,  0 users,  load average: 0.10, 0.23, 0.27
	Linux no-preload-934668 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e] <==
	I0116 03:27:09.387527       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:29:08.391965       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:29:08.392180       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0116 03:29:09.392857       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:29:09.393015       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:29:09.393052       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:29:09.393160       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:29:09.393628       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:29:09.394955       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:30:09.394068       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:30:09.394264       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:30:09.394280       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:30:09.396067       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:30:09.396202       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:30:09.396215       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:32:09.394979       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:32:09.395081       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:32:09.395096       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:32:09.397217       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:32:09.397577       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:32:09.397632       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d] <==
	I0116 03:26:54.976251       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:27:24.488444       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:27:24.985655       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:27:54.496250       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:27:54.994191       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:28:24.503447       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:28:25.006117       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:28:54.509638       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:28:55.014978       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:29:24.517843       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:29:25.023705       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:29:54.524781       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:29:55.033040       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:30:24.533694       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:30:25.043787       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0116 03:30:30.082395       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="656.012µs"
	I0116 03:30:45.072047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="90.539µs"
	E0116 03:30:54.539418       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:30:55.053548       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:31:24.545707       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:31:25.063256       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:31:54.552196       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:31:55.072368       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:32:24.558237       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:32:25.081965       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8] <==
	I0116 03:19:28.781687       1 server_others.go:72] "Using iptables proxy"
	I0116 03:19:28.792975       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.29"]
	I0116 03:19:28.842667       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0116 03:19:28.842782       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 03:19:28.842818       1 server_others.go:168] "Using iptables Proxier"
	I0116 03:19:28.846138       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 03:19:28.846798       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0116 03:19:28.846845       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:19:28.848067       1 config.go:188] "Starting service config controller"
	I0116 03:19:28.848114       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 03:19:28.848146       1 config.go:97] "Starting endpoint slice config controller"
	I0116 03:19:28.848150       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 03:19:28.850090       1 config.go:315] "Starting node config controller"
	I0116 03:19:28.850100       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 03:19:28.948686       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 03:19:28.948713       1 shared_informer.go:318] Caches are synced for service config
	I0116 03:19:28.950441       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021] <==
	W0116 03:19:09.238462       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 03:19:09.238637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 03:19:09.253515       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 03:19:09.253613       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 03:19:09.291137       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 03:19:09.291234       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 03:19:09.373961       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 03:19:09.374032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0116 03:19:09.388267       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:19:09.388558       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0116 03:19:09.388488       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 03:19:09.388661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0116 03:19:09.456917       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:19:09.457007       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 03:19:09.532399       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 03:19:09.532515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 03:19:09.543029       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:19:09.543138       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 03:19:09.595768       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 03:19:09.595869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 03:19:09.649903       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:19:09.650002       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 03:19:09.672768       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:19:09.672868       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0116 03:19:12.183439       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:13:44 UTC, ends at Tue 2024-01-16 03:32:35 UTC. --
	Jan 16 03:30:12 no-preload-934668 kubelet[4294]: E0116 03:30:12.122838    4294 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:30:12 no-preload-934668 kubelet[4294]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:30:12 no-preload-934668 kubelet[4294]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:30:12 no-preload-934668 kubelet[4294]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:30:17 no-preload-934668 kubelet[4294]: E0116 03:30:17.063289    4294 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 16 03:30:17 no-preload-934668 kubelet[4294]: E0116 03:30:17.063423    4294 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 16 03:30:17 no-preload-934668 kubelet[4294]: E0116 03:30:17.063633    4294 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-k224t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-6w2t7_kube-system(5169514b-c507-4e5e-b607-6806f6e32801): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 03:30:17 no-preload-934668 kubelet[4294]: E0116 03:30:17.063672    4294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-6w2t7" podUID="5169514b-c507-4e5e-b607-6806f6e32801"
	Jan 16 03:30:30 no-preload-934668 kubelet[4294]: E0116 03:30:30.052001    4294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6w2t7" podUID="5169514b-c507-4e5e-b607-6806f6e32801"
	Jan 16 03:30:45 no-preload-934668 kubelet[4294]: E0116 03:30:45.051492    4294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6w2t7" podUID="5169514b-c507-4e5e-b607-6806f6e32801"
	Jan 16 03:31:00 no-preload-934668 kubelet[4294]: E0116 03:31:00.051506    4294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6w2t7" podUID="5169514b-c507-4e5e-b607-6806f6e32801"
	Jan 16 03:31:12 no-preload-934668 kubelet[4294]: E0116 03:31:12.121496    4294 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:31:12 no-preload-934668 kubelet[4294]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:31:12 no-preload-934668 kubelet[4294]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:31:12 no-preload-934668 kubelet[4294]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:31:15 no-preload-934668 kubelet[4294]: E0116 03:31:15.051103    4294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6w2t7" podUID="5169514b-c507-4e5e-b607-6806f6e32801"
	Jan 16 03:31:30 no-preload-934668 kubelet[4294]: E0116 03:31:30.051122    4294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6w2t7" podUID="5169514b-c507-4e5e-b607-6806f6e32801"
	Jan 16 03:31:42 no-preload-934668 kubelet[4294]: E0116 03:31:42.054410    4294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6w2t7" podUID="5169514b-c507-4e5e-b607-6806f6e32801"
	Jan 16 03:31:54 no-preload-934668 kubelet[4294]: E0116 03:31:54.051876    4294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6w2t7" podUID="5169514b-c507-4e5e-b607-6806f6e32801"
	Jan 16 03:32:06 no-preload-934668 kubelet[4294]: E0116 03:32:06.053046    4294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6w2t7" podUID="5169514b-c507-4e5e-b607-6806f6e32801"
	Jan 16 03:32:12 no-preload-934668 kubelet[4294]: E0116 03:32:12.123610    4294 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:32:12 no-preload-934668 kubelet[4294]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:32:12 no-preload-934668 kubelet[4294]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:32:12 no-preload-934668 kubelet[4294]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:32:21 no-preload-934668 kubelet[4294]: E0116 03:32:21.050698    4294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6w2t7" podUID="5169514b-c507-4e5e-b607-6806f6e32801"
	
	
	==> storage-provisioner [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19] <==
	I0116 03:19:28.659988       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:19:28.677918       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:19:28.679675       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:19:28.689458       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:19:28.690266       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"25e3e812-73cb-41ab-994b-f0e2128f58ef", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-934668_4ef5a5bf-38a6-4e6f-addf-527dbee8b605 became leader
	I0116 03:19:28.690716       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-934668_4ef5a5bf-38a6-4e6f-addf-527dbee8b605!
	I0116 03:19:28.791905       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-934668_4ef5a5bf-38a6-4e6f-addf-527dbee8b605!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-934668 -n no-preload-934668
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-934668 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-6w2t7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-934668 describe pod metrics-server-57f55c9bc5-6w2t7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-934668 describe pod metrics-server-57f55c9bc5-6w2t7: exit status 1 (97.272483ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-6w2t7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-934668 describe pod metrics-server-57f55c9bc5-6w2t7: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (510.93s)
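The post-mortem above lists metrics-server-57f55c9bc5-6w2t7 as the only non-running pod, and the kubelet log shows it stuck in ImagePullBackOff because its image had been remapped to the unreachable fake.domain registry via the addons enable --registries flag. As a minimal sketch for rerunning the same checks by hand, reusing the profile and pod names from this trace (not part of the original run):

	kubectl --context no-preload-934668 get pods -A --field-selector=status.phase!=Running
	kubectl --context no-preload-934668 -n kube-system describe pod metrics-server-57f55c9bc5-6w2t7

The first command mirrors the field-selector query the harness ran; the second surfaces the same pull-failure events recorded in the kubelet log.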

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (404.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0116 03:27:27.512793  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 03:28:12.495966  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-480663 -n embed-certs-480663
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-16 03:33:26.505920954 +0000 UTC m=+5584.389747535
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-480663 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-480663 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.9µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-480663 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
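The "Addon deployment info" above is blank because the describe command was issued after the test's 9m0s context had already expired (hence the 1.9µs "context deadline exceeded"), so no output was captured. A minimal sketch for inspecting the dashboard deployments and the scraper image by hand, reusing the profile and deployment names from this trace (the jsonpath expression is illustrative and not from the original run):

	kubectl --context embed-certs-480663 -n kubernetes-dashboard get deploy,pods -o wide
	kubectl --context embed-certs-480663 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'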
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-480663 -n embed-certs-480663
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-480663 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-480663 logs -n 25: (1.569535993s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:06 UTC |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-934668             | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-934668                                   | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-480663            | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-788237        | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-775571  | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC | 16 Jan 24 03:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC |                     |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-934668                  | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-480663                 | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-934668                                   | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC | 16 Jan 24 03:24 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC | 16 Jan 24 03:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-788237             | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC | 16 Jan 24 03:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-775571       | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC | 16 Jan 24 03:23 UTC |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:32 UTC | 16 Jan 24 03:32 UTC |
	| start   | -p newest-cni-190843 --memory=2200 --alsologtostderr   | newest-cni-190843            | jenkins | v1.32.0 | 16 Jan 24 03:32 UTC | 16 Jan 24 03:33 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-934668                                   | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:32 UTC | 16 Jan 24 03:32 UTC |
	| start   | -p auto-278325 --memory=3072                           | auto-278325                  | jenkins | v1.32.0 | 16 Jan 24 03:32 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-190843             | newest-cni-190843            | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC | 16 Jan 24 03:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-190843                                   | newest-cni-190843            | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC | 16 Jan 24 03:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-190843                  | newest-cni-190843            | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC | 16 Jan 24 03:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-190843 --memory=2200 --alsologtostderr   | newest-cni-190843            | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
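
The table above is the command audit that minikube logs emits: one row per CLI invocation with its flags, profile, user, and start/end timestamps. Purely as a hedged illustration (not the test harness's actual code), the Go sketch below reproduces the last start row with os/exec; the binary path out/minikube-linux-amd64 and the flag list are copied from that row, while the 20-minute timeout and the runStart helper name are assumptions made for the example.

    package main

    import (
        "context"
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // runStart shells out to the minikube binary the way a harness would,
    // streaming output and enforcing an overall deadline.
    func runStart(ctx context.Context, binary string, args ...string) error {
        cmd := exec.CommandContext(ctx, binary, args...)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        // Flags copied from the last "start" row of the audit table above;
        // the 20-minute budget is an assumption.
        ctx, cancel := context.WithTimeout(context.Background(), 20*time.Minute)
        defer cancel()
        err := runStart(ctx, "out/minikube-linux-amd64",
            "start", "-p", "newest-cni-190843", "--memory=2200", "--alsologtostderr",
            "--wait=apiserver,system_pods,default_sa",
            "--feature-gates", "ServerSideApply=true",
            "--network-plugin=cni",
            "--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
            "--driver=kvm2", "--container-runtime=crio",
            "--kubernetes-version=v1.29.0-rc.2")
        if err != nil {
            fmt.Fprintln(os.Stderr, "start failed:", err)
            os.Exit(1)
        }
    }
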
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:33:16
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:33:16.518112 1017941 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:33:16.518261 1017941 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:33:16.518273 1017941 out.go:309] Setting ErrFile to fd 2...
	I0116 03:33:16.518280 1017941 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:33:16.518571 1017941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 03:33:16.519192 1017941 out.go:303] Setting JSON to false
	I0116 03:33:16.520292 1017941 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":15346,"bootTime":1705360651,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 03:33:16.520375 1017941 start.go:138] virtualization: kvm guest
	I0116 03:33:16.522832 1017941 out.go:177] * [newest-cni-190843] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 03:33:16.524450 1017941 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:33:16.524468 1017941 notify.go:220] Checking for updates...
	I0116 03:33:16.526043 1017941 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:33:16.527568 1017941 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:33:16.529021 1017941 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 03:33:16.530638 1017941 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 03:33:16.531940 1017941 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:33:16.533909 1017941 config.go:182] Loaded profile config "newest-cni-190843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:33:16.534361 1017941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:33:16.534441 1017941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:33:16.550720 1017941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
	I0116 03:33:16.551169 1017941 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:33:16.551822 1017941 main.go:141] libmachine: Using API Version  1
	I0116 03:33:16.551848 1017941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:33:16.552255 1017941 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:33:16.552479 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .DriverName
	I0116 03:33:16.552751 1017941 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:33:16.553091 1017941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:33:16.553157 1017941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:33:16.569002 1017941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0116 03:33:16.569500 1017941 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:33:16.570071 1017941 main.go:141] libmachine: Using API Version  1
	I0116 03:33:16.570100 1017941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:33:16.570463 1017941 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:33:16.570696 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .DriverName
	I0116 03:33:16.608820 1017941 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 03:33:16.610301 1017941 start.go:298] selected driver: kvm2
	I0116 03:33:16.610327 1017941 start.go:902] validating driver "kvm2" against &{Name:newest-cni-190843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-190843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_r
eady:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:33:16.610467 1017941 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:33:16.611590 1017941 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:33:16.611708 1017941 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17967-971255/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 03:33:16.631833 1017941 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 03:33:16.632279 1017941 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0116 03:33:16.632363 1017941 cni.go:84] Creating CNI manager for ""
	I0116 03:33:16.632383 1017941 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:33:16.632401 1017941 start_flags.go:321] config:
	{Name:newest-cni-190843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-190843 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> Exposed
Ports:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:33:16.632591 1017941 iso.go:125] acquiring lock: {Name:mkc0ce0b4b435c5fb7570521b794e5982bb7bf6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:33:16.634748 1017941 out.go:177] * Starting control plane node newest-cni-190843 in cluster newest-cni-190843
	I0116 03:33:16.636186 1017941 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 03:33:16.636239 1017941 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0116 03:33:16.636256 1017941 cache.go:56] Caching tarball of preloaded images
	I0116 03:33:16.636366 1017941 preload.go:174] Found /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 03:33:16.636379 1017941 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0116 03:33:16.636511 1017941 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/newest-cni-190843/config.json ...
	I0116 03:33:16.636729 1017941 start.go:365] acquiring machines lock for newest-cni-190843: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:33:16.636792 1017941 start.go:369] acquired machines lock for "newest-cni-190843" in 38.987µs
	I0116 03:33:16.636812 1017941 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:33:16.636820 1017941 fix.go:54] fixHost starting: 
	I0116 03:33:16.637110 1017941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:33:16.637167 1017941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:33:16.652939 1017941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43509
	I0116 03:33:16.653526 1017941 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:33:16.654103 1017941 main.go:141] libmachine: Using API Version  1
	I0116 03:33:16.654133 1017941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:33:16.654477 1017941 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:33:16.654700 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .DriverName
	I0116 03:33:16.654945 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetState
	I0116 03:33:16.656950 1017941 fix.go:102] recreateIfNeeded on newest-cni-190843: state=Stopped err=<nil>
	I0116 03:33:16.656979 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .DriverName
	W0116 03:33:16.657155 1017941 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:33:16.659681 1017941 out.go:177] * Restarting existing kvm2 VM for "newest-cni-190843" ...
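
Up to this point the newest-cni-190843 run (process 1017941) has reloaded its saved profile, validated the kvm2 driver, taken the per-host machines lock (Delay:500ms, Timeout:13m0s), and, finding the VM stopped, decided to restart it. As a sketch of that acquire-with-retry lock pattern only, and not of minikube's real lock implementation, the fragment below polls for an exclusive lock file at a fixed delay until a deadline; the lock path and helper names are invented for the example. The interleaved lines that follow come from a second, parallel start (process 1017511) bootstrapping the auto-278325 cluster.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquireLock keeps trying to create the lock file exclusively, sleeping
    // `delay` between attempts, until `timeout` elapses. The 500ms/13m values
    // mirror the parameters logged above; the file-based mechanism is only
    // illustrative.
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for lock %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        start := time.Now()
        release, err := acquireLock("/tmp/machines-newest-cni-190843.lock",
            500*time.Millisecond, 13*time.Minute)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer release()
        fmt.Printf("acquired machines lock in %s\n", time.Since(start))
    }
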
	I0116 03:33:15.879164 1017511 out.go:204]   - Generating certificates and keys ...
	I0116 03:33:15.879325 1017511 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:33:15.879410 1017511 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:33:16.442391 1017511 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 03:33:17.343356 1017511 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 03:33:17.443653 1017511 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 03:33:17.636927 1017511 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 03:33:17.868818 1017511 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 03:33:17.869211 1017511 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-278325 localhost] and IPs [192.168.50.113 127.0.0.1 ::1]
	I0116 03:33:18.176727 1017511 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 03:33:18.177003 1017511 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [auto-278325 localhost] and IPs [192.168.50.113 127.0.0.1 ::1]
	I0116 03:33:18.347148 1017511 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 03:33:18.456883 1017511 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 03:33:18.522485 1017511 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 03:33:18.522841 1017511 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:33:18.684694 1017511 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:33:19.044422 1017511 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:33:19.530624 1017511 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:33:19.809289 1017511 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:33:19.810122 1017511 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:33:19.812538 1017511 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
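
The 1017511 lines above are kubeadm's [certs] and [kubeconfig] phases for the parallel auto-278325 start: the existing CA is reused, client and serving key pairs are generated (apiserver-kubelet-client, front-proxy, etcd), and the etcd/server certificate is issued for the DNS names [auto-278325 localhost] plus the node IPs. The sketch below shows the same kind of issuance with Go's crypto/x509, creating a CA and signing a serving certificate for those names; the key size, validity periods, and the omitted PEM/file handling are simplifying assumptions, and this is not kubeadm's code.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Self-signed CA, standing in for the "existing ca certificate authority".
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "etcd-ca"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving cert signed for the DNS names and IPs reported in the log above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "etcd-server"},
            DNSNames:     []string{"auto-278325", "localhost"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.50.113"), net.ParseIP("127.0.0.1"), net.ParseIP("::1")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("serving cert: %d bytes, signed by %s\n", len(srvDER), caCert.Subject.CommonName)
    }
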
	I0116 03:33:16.661175 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .Start
	I0116 03:33:16.661427 1017941 main.go:141] libmachine: (newest-cni-190843) Ensuring networks are active...
	I0116 03:33:16.662410 1017941 main.go:141] libmachine: (newest-cni-190843) Ensuring network default is active
	I0116 03:33:16.662731 1017941 main.go:141] libmachine: (newest-cni-190843) Ensuring network mk-newest-cni-190843 is active
	I0116 03:33:16.663128 1017941 main.go:141] libmachine: (newest-cni-190843) Getting domain xml...
	I0116 03:33:16.663939 1017941 main.go:141] libmachine: (newest-cni-190843) Creating domain...
	I0116 03:33:17.972586 1017941 main.go:141] libmachine: (newest-cni-190843) Waiting to get IP...
	I0116 03:33:17.973470 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:17.973889 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:33:17.973970 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:33:17.973861 1017976 retry.go:31] will retry after 276.086948ms: waiting for machine to come up
	I0116 03:33:18.251706 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:18.252253 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:33:18.252282 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:33:18.252200 1017976 retry.go:31] will retry after 316.381196ms: waiting for machine to come up
	I0116 03:33:18.569899 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:18.570385 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:33:18.570424 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:33:18.570316 1017976 retry.go:31] will retry after 350.10473ms: waiting for machine to come up
	I0116 03:33:18.922115 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:18.922645 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:33:18.922674 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:33:18.922591 1017976 retry.go:31] will retry after 595.023308ms: waiting for machine to come up
	I0116 03:33:19.519220 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:19.519742 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:33:19.519778 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:33:19.519701 1017976 retry.go:31] will retry after 656.318492ms: waiting for machine to come up
	I0116 03:33:20.177637 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:20.178234 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:33:20.178266 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:33:20.178161 1017976 retry.go:31] will retry after 849.846997ms: waiting for machine to come up
	I0116 03:33:21.029338 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:21.029867 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:33:21.029900 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:33:21.029794 1017976 retry.go:31] will retry after 1.184297259s: waiting for machine to come up
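
Meanwhile the restarted newest-cni-190843 domain is being polled for an IP address: each attempt looks up the DHCP lease for MAC 52:54:00:b0:40:c6 in the mk-newest-cni-190843 network and, on failure, schedules a retry after a growing, jittered delay (276ms, 316ms, 350ms, 595ms, and so on). A minimal sketch of that wait loop follows; lookupIP is a placeholder for the real libvirt lease query and the backoff constants are assumptions.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("unable to find current IP address of domain")

    // lookupIP stands in for querying the libvirt network for the DHCP lease
    // matching the domain's MAC address; here it always fails to show retries.
    func lookupIP(mac string) (string, error) { return "", errNoLease }

    // waitForIP retries lookupIP with a growing, jittered delay until the
    // deadline, mirroring the "will retry after ..." lines in the log above.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for attempt := 1; ; attempt++ {
            ip, err := lookupIP(mac)
            if err == nil {
                return ip, nil
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
            time.Sleep(jittered)
            delay = delay * 3 / 2 // grow roughly 1.5x per attempt, capped below
            if delay > 5*time.Second {
                delay = 5 * time.Second
            }
        }
    }

    func main() {
        if _, err := waitForIP("52:54:00:b0:40:c6", 3*time.Second); err != nil {
            fmt.Println(err)
        }
    }

The real driver logs each attempt through retry.go, as seen above; the sketch simply prints the chosen delay to show the same shape.
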
	I0116 03:33:19.814597 1017511 out.go:204]   - Booting up control plane ...
	I0116 03:33:19.814721 1017511 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:33:19.814816 1017511 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:33:19.815318 1017511 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:33:19.836056 1017511 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:33:19.837126 1017511 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:33:19.837461 1017511 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 03:33:19.969906 1017511 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
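
Here the auto-278325 bootstrap has written the static Pod manifests and kubelet configuration and is waiting up to 4m0s for the control plane to come up, which is essentially a health poll against the API server. The sketch below shows the general shape with a plain HTTPS GET of /healthz; the endpoint, poll interval, and the InsecureSkipVerify shortcut are assumptions for illustration, since kubeadm's real wait goes through an authenticated client-go client.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the given URL until it returns 200 OK or the timeout
    // expires. Certificate verification is disabled only to keep the sketch
    // self-contained.
    func waitForHealthz(url string, interval, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("control plane not healthy after %s", timeout)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        // Endpoint is a guess based on the node IP logged above.
        if err := waitForHealthz("https://192.168.50.113:8443/healthz", 2*time.Second, 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
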
	I0116 03:33:22.216263 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:22.216817 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:33:22.216865 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:33:22.216773 1017976 retry.go:31] will retry after 1.326509725s: waiting for machine to come up
	I0116 03:33:23.545571 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:23.546186 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:33:23.546224 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:33:23.546104 1017976 retry.go:31] will retry after 1.274329786s: waiting for machine to come up
	I0116 03:33:24.822439 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:24.822840 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:33:24.822869 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:33:24.822812 1017976 retry.go:31] will retry after 2.074845726s: waiting for machine to come up
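
The Last Start log is cut off with both starts still in flight. The CRI-O journal below was captured from the embed-certs-480663 node and records the CRI gRPC traffic that the kubelet and the log collector generate: Version, ImageFsInfo, ListContainers, and ListPodSandbox requests with their full responses. As a hedged sketch of one such call, the fragment below dials the CRI-O socket and issues the same runtime.v1 Version RPC that appears in the journal; the socket path, module versions, and minimal error handling are assumptions.

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial CRI-O's CRI socket; /var/run/crio/crio.sock is the conventional
        // path but is an assumption here.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Same RPC that appears in the journal as /runtime.v1.RuntimeService/Version.
        client := runtimev1.NewRuntimeServiceClient(conn)
        resp, err := client.Version(ctx, &runtimev1.VersionRequest{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
    }

Running crictl version against the same socket exercises this RPC from the command line, which is what produces the Version request/response pairs visible in the journal.
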
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 03:12:41 UTC, ends at Tue 2024-01-16 03:33:27 UTC. --
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.453733026Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=982e0075-5b27-44fc-9aff-c6564666087a name=/runtime.v1.RuntimeService/Version
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.455504628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e7e55f08-fd50-4f76-9322-75001dd3a009 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.455904070Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705376007455892152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e7e55f08-fd50-4f76-9322-75001dd3a009 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.456625532Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3973ba42-d5ec-40b2-90df-deb9880496df name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.456689974Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3973ba42-d5ec-40b2-90df-deb9880496df name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.456900598Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657,PodSandboxId:5e742497bde0f88f37c8d63edf849ebec1984a658dfa6a132d1242bbf84a0acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705374826755989756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da59ff59-869f-48a9-a5c5-c95bb807cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 427d56c5,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5f11b59c21e49de34360bf58b39d8139d2062e46b02a2d693f3ea0fd10fd13b,PodSandboxId:e1e3ebeead958f5c82e6e4fdf744b4ed10ab4573850eb65ecd70bb6ce8ead286,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705374804591127540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7ad1e22-9448-44d8-aee0-5170d264d3f6,},Annotations:map[string]string{io.kubernetes.container.hash: 9a679b09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70,PodSandboxId:a4bbad6c2b2c69570905e1037daed24395fadf12cc9c80716ecda8f08fc1e5b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705374803134072660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-stqh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbcef96-218b-42ed-9daf-72c274be0690,},Annotations:map[string]string{io.kubernetes.container.hash: e0eac2e6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76,PodSandboxId:5e742497bde0f88f37c8d63edf849ebec1984a658dfa6a132d1242bbf84a0acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705374795550747304,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: da59ff59-869f-48a9-a5c5-c95bb807cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 427d56c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047,PodSandboxId:9ff79d096f4801dbce33572a413be54f3c99636c38dd2f99bdf17af35cd89ac9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705374795459731629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabb98a7-fe
55-4105-a5d2-c1e312464107,},Annotations:map[string]string{io.kubernetes.container.hash: aa3b13c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8,PodSandboxId:29324fa0b0c09f9749f68778cd4da177bb3207dad8e538cb88511f7b754b8a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705374789321061954,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ab7aa1bd8c13dd11
2e2029a6666906b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618,PodSandboxId:dad887fa7ce4c4d730ad4ad07900ef244bee16de0c9abab8cc98d3809bc84130,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705374788776726935,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b4972d069ffd7fa5b114fbba2bb2c59,},Annotations:map[string]string{io
.kubernetes.container.hash: bc32c30a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f,PodSandboxId:9c0eabefa8e5bad7f1757d12ef76455a94e8e5a7603524e869b455b5b746f748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705374788574902731,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70760e2d8cd956052088e57297a3e675,},Annotations:map[string]string{io.kubernete
s.container.hash: 9057951a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994,PodSandboxId:237d387addb2c96c71300d833fdc618b3cf0f45ce5dbc34e4a9f5aab7ca9cca0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705374788509555694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5583cf18e2fbf50799cf889aa6297f90,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3973ba42-d5ec-40b2-90df-deb9880496df name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.504432948Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d837e211-3dd1-463e-b33d-64e3c74c26e4 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.504515809Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d837e211-3dd1-463e-b33d-64e3c74c26e4 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.507644955Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a9b83b8f-3ef2-4764-9397-59d35cf200c1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.508075071Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e1e3ebeead958f5c82e6e4fdf744b4ed10ab4573850eb65ecd70bb6ce8ead286,Metadata:&PodSandboxMetadata{Name:busybox,Uid:b7ad1e22-9448-44d8-aee0-5170d264d3f6,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705374802544284917,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7ad1e22-9448-44d8-aee0-5170d264d3f6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T03:13:14.464953055Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a4bbad6c2b2c69570905e1037daed24395fadf12cc9c80716ecda8f08fc1e5b9,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-stqh5,Uid:adbcef96-218b-42ed-9daf-72c274be0690,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705374802453713
639,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-stqh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbcef96-218b-42ed-9daf-72c274be0690,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T03:13:14.464949967Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0ec4898019abcd72dea9a3d9c22e8d4d45cf71da7024dbbf687dd034c49cc500,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-7d2fh,Uid:512cf579-f335-4995-8721-74bb84da776e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705374799547531192,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-7d2fh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 512cf579-f335-4995-8721-74bb84da776e,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T03:13:14.
464948850Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9ff79d096f4801dbce33572a413be54f3c99636c38dd2f99bdf17af35cd89ac9,Metadata:&PodSandboxMetadata{Name:kube-proxy-j4786,Uid:aabb98a7-fe55-4105-a5d2-c1e312464107,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705374794823105706,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j4786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabb98a7-fe55-4105-a5d2-c1e312464107,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T03:13:14.464947551Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5e742497bde0f88f37c8d63edf849ebec1984a658dfa6a132d1242bbf84a0acb,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:da59ff59-869f-48a9-a5c5-c95bb807cbcf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705374794814554667,Labels:map[string
]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da59ff59-869f-48a9-a5c5-c95bb807cbcf,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.i
o/config.seen: 2024-01-16T03:13:14.464952168Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:29324fa0b0c09f9749f68778cd4da177bb3207dad8e538cb88511f7b754b8a8d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-480663,Uid:1ab7aa1bd8c13dd112e2029a6666906b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705374788031281521,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ab7aa1bd8c13dd112e2029a6666906b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1ab7aa1bd8c13dd112e2029a6666906b,kubernetes.io/config.seen: 2024-01-16T03:13:07.451623370Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:237d387addb2c96c71300d833fdc618b3cf0f45ce5dbc34e4a9f5aab7ca9cca0,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-480663,Uid:5583cf18e2fbf50799cf889aa6297f9
0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705374787994562771,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5583cf18e2fbf50799cf889aa6297f90,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5583cf18e2fbf50799cf889aa6297f90,kubernetes.io/config.seen: 2024-01-16T03:13:07.451629809Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9c0eabefa8e5bad7f1757d12ef76455a94e8e5a7603524e869b455b5b746f748,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-480663,Uid:70760e2d8cd956052088e57297a3e675,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705374787983799921,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 70760e2d8cd956052088e57297a3e675,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.150:8443,kubernetes.io/config.hash: 70760e2d8cd956052088e57297a3e675,kubernetes.io/config.seen: 2024-01-16T03:13:07.451628373Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dad887fa7ce4c4d730ad4ad07900ef244bee16de0c9abab8cc98d3809bc84130,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-480663,Uid:0b4972d069ffd7fa5b114fbba2bb2c59,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705374787974699707,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b4972d069ffd7fa5b114fbba2bb2c59,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.150:2379,kubernetes.io/config.hash: 0b4972d069ffd7fa5b114fbba2
bb2c59,kubernetes.io/config.seen: 2024-01-16T03:13:07.451627183Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=a9b83b8f-3ef2-4764-9397-59d35cf200c1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.509017834Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=462cd2d6-e8b7-4cc5-89a1-dbdeb8975b5f name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.509063683Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=462cd2d6-e8b7-4cc5-89a1-dbdeb8975b5f name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.509354263Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657,PodSandboxId:5e742497bde0f88f37c8d63edf849ebec1984a658dfa6a132d1242bbf84a0acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705374826755989756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da59ff59-869f-48a9-a5c5-c95bb807cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 427d56c5,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5f11b59c21e49de34360bf58b39d8139d2062e46b02a2d693f3ea0fd10fd13b,PodSandboxId:e1e3ebeead958f5c82e6e4fdf744b4ed10ab4573850eb65ecd70bb6ce8ead286,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705374804591127540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7ad1e22-9448-44d8-aee0-5170d264d3f6,},Annotations:map[string]string{io.kubernetes.container.hash: 9a679b09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70,PodSandboxId:a4bbad6c2b2c69570905e1037daed24395fadf12cc9c80716ecda8f08fc1e5b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705374803134072660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-stqh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbcef96-218b-42ed-9daf-72c274be0690,},Annotations:map[string]string{io.kubernetes.container.hash: e0eac2e6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047,PodSandboxId:9ff79d096f4801dbce33572a413be54f3c99636c38dd2f99bdf17af35cd89ac9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705374795459731629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabb98a7-
fe55-4105-a5d2-c1e312464107,},Annotations:map[string]string{io.kubernetes.container.hash: aa3b13c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8,PodSandboxId:29324fa0b0c09f9749f68778cd4da177bb3207dad8e538cb88511f7b754b8a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705374789321061954,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ab7aa1bd8c13dd
112e2029a6666906b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618,PodSandboxId:dad887fa7ce4c4d730ad4ad07900ef244bee16de0c9abab8cc98d3809bc84130,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705374788776726935,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b4972d069ffd7fa5b114fbba2bb2c59,},Annotations:map[string]string{
io.kubernetes.container.hash: bc32c30a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f,PodSandboxId:9c0eabefa8e5bad7f1757d12ef76455a94e8e5a7603524e869b455b5b746f748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705374788574902731,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70760e2d8cd956052088e57297a3e675,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9057951a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994,PodSandboxId:237d387addb2c96c71300d833fdc618b3cf0f45ce5dbc34e4a9f5aab7ca9cca0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705374788509555694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5583cf18e2fbf50799cf889aa6297f90,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=462cd2d6-e8b7-4cc5-89a1-dbdeb8975b5f name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.510823847Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=70d8fb67-641f-4889-ad9f-ed9e41fc18d4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.511417343Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705376007511403045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=70d8fb67-641f-4889-ad9f-ed9e41fc18d4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.512443192Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5f72e32a-9548-4157-8065-52290a52e9e3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.512511886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5f72e32a-9548-4157-8065-52290a52e9e3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.512750032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657,PodSandboxId:5e742497bde0f88f37c8d63edf849ebec1984a658dfa6a132d1242bbf84a0acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705374826755989756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da59ff59-869f-48a9-a5c5-c95bb807cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 427d56c5,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5f11b59c21e49de34360bf58b39d8139d2062e46b02a2d693f3ea0fd10fd13b,PodSandboxId:e1e3ebeead958f5c82e6e4fdf744b4ed10ab4573850eb65ecd70bb6ce8ead286,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705374804591127540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7ad1e22-9448-44d8-aee0-5170d264d3f6,},Annotations:map[string]string{io.kubernetes.container.hash: 9a679b09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70,PodSandboxId:a4bbad6c2b2c69570905e1037daed24395fadf12cc9c80716ecda8f08fc1e5b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705374803134072660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-stqh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbcef96-218b-42ed-9daf-72c274be0690,},Annotations:map[string]string{io.kubernetes.container.hash: e0eac2e6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76,PodSandboxId:5e742497bde0f88f37c8d63edf849ebec1984a658dfa6a132d1242bbf84a0acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705374795550747304,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: da59ff59-869f-48a9-a5c5-c95bb807cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 427d56c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047,PodSandboxId:9ff79d096f4801dbce33572a413be54f3c99636c38dd2f99bdf17af35cd89ac9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705374795459731629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabb98a7-fe
55-4105-a5d2-c1e312464107,},Annotations:map[string]string{io.kubernetes.container.hash: aa3b13c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8,PodSandboxId:29324fa0b0c09f9749f68778cd4da177bb3207dad8e538cb88511f7b754b8a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705374789321061954,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ab7aa1bd8c13dd11
2e2029a6666906b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618,PodSandboxId:dad887fa7ce4c4d730ad4ad07900ef244bee16de0c9abab8cc98d3809bc84130,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705374788776726935,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b4972d069ffd7fa5b114fbba2bb2c59,},Annotations:map[string]string{io
.kubernetes.container.hash: bc32c30a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f,PodSandboxId:9c0eabefa8e5bad7f1757d12ef76455a94e8e5a7603524e869b455b5b746f748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705374788574902731,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70760e2d8cd956052088e57297a3e675,},Annotations:map[string]string{io.kubernete
s.container.hash: 9057951a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994,PodSandboxId:237d387addb2c96c71300d833fdc618b3cf0f45ce5dbc34e4a9f5aab7ca9cca0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705374788509555694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5583cf18e2fbf50799cf889aa6297f90,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5f72e32a-9548-4157-8065-52290a52e9e3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.553269864Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8faf93f4-01c4-4fb6-81be-f4a1cac336a1 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.553400443Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8faf93f4-01c4-4fb6-81be-f4a1cac336a1 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.555251499Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f9778a35-eccf-46d9-b7c8-5ef766542d8e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.555840864Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705376007555817232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f9778a35-eccf-46d9-b7c8-5ef766542d8e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.556919102Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ca6ca914-af9e-4d90-acf1-cc7a06b19964 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.556986215Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ca6ca914-af9e-4d90-acf1-cc7a06b19964 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:33:27 embed-certs-480663 crio[723]: time="2024-01-16 03:33:27.557327042Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657,PodSandboxId:5e742497bde0f88f37c8d63edf849ebec1984a658dfa6a132d1242bbf84a0acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705374826755989756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da59ff59-869f-48a9-a5c5-c95bb807cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 427d56c5,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5f11b59c21e49de34360bf58b39d8139d2062e46b02a2d693f3ea0fd10fd13b,PodSandboxId:e1e3ebeead958f5c82e6e4fdf744b4ed10ab4573850eb65ecd70bb6ce8ead286,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705374804591127540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7ad1e22-9448-44d8-aee0-5170d264d3f6,},Annotations:map[string]string{io.kubernetes.container.hash: 9a679b09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70,PodSandboxId:a4bbad6c2b2c69570905e1037daed24395fadf12cc9c80716ecda8f08fc1e5b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705374803134072660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-stqh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbcef96-218b-42ed-9daf-72c274be0690,},Annotations:map[string]string{io.kubernetes.container.hash: e0eac2e6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76,PodSandboxId:5e742497bde0f88f37c8d63edf849ebec1984a658dfa6a132d1242bbf84a0acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705374795550747304,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: da59ff59-869f-48a9-a5c5-c95bb807cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 427d56c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047,PodSandboxId:9ff79d096f4801dbce33572a413be54f3c99636c38dd2f99bdf17af35cd89ac9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705374795459731629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabb98a7-fe
55-4105-a5d2-c1e312464107,},Annotations:map[string]string{io.kubernetes.container.hash: aa3b13c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8,PodSandboxId:29324fa0b0c09f9749f68778cd4da177bb3207dad8e538cb88511f7b754b8a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705374789321061954,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ab7aa1bd8c13dd11
2e2029a6666906b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618,PodSandboxId:dad887fa7ce4c4d730ad4ad07900ef244bee16de0c9abab8cc98d3809bc84130,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705374788776726935,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b4972d069ffd7fa5b114fbba2bb2c59,},Annotations:map[string]string{io
.kubernetes.container.hash: bc32c30a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f,PodSandboxId:9c0eabefa8e5bad7f1757d12ef76455a94e8e5a7603524e869b455b5b746f748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705374788574902731,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70760e2d8cd956052088e57297a3e675,},Annotations:map[string]string{io.kubernete
s.container.hash: 9057951a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994,PodSandboxId:237d387addb2c96c71300d833fdc618b3cf0f45ce5dbc34e4a9f5aab7ca9cca0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705374788509555694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-480663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5583cf18e2fbf50799cf889aa6297f90,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ca6ca914-af9e-4d90-acf1-cc7a06b19964 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0f37f0f7c7339       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   5e742497bde0f       storage-provisioner
	f5f11b59c21e4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   e1e3ebeead958       busybox
	2cc211416aab6       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      20 minutes ago      Running             coredns                   1                   a4bbad6c2b2c6       coredns-5dd5756b68-stqh5
	653a87cc5b4e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   5e742497bde0f       storage-provisioner
	da3ca3a9cda0a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      20 minutes ago      Running             kube-proxy                1                   9ff79d096f480       kube-proxy-j4786
	ab45603106135       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      20 minutes ago      Running             kube-scheduler            1                   29324fa0b0c09       kube-scheduler-embed-certs-480663
	36288d0c42d12       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      20 minutes ago      Running             etcd                      1                   dad887fa7ce4c       etcd-embed-certs-480663
	42d452ff0268f       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      20 minutes ago      Running             kube-apiserver            1                   9c0eabefa8e5b       kube-apiserver-embed-certs-480663
	f75f023773154       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      20 minutes ago      Running             kube-controller-manager   1                   237d387addb2c       kube-controller-manager-embed-certs-480663
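The container status table above can be reproduced directly on the node; a minimal sketch, assuming the embed-certs-480663 profile is still running and crictl is available inside the VM:

    out/minikube-linux-amd64 -p embed-certs-480663 ssh "sudo crictl ps -a"

crictl ps -a lists exited containers as well, which is why the restarted storage-provisioner (attempt 1) still shows up alongside its running attempt 2.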
	
	
	==> coredns [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55407 - 52631 "HINFO IN 341389483529151724.1810516983307257500. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010011316s
	
	
	==> describe nodes <==
	Name:               embed-certs-480663
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-480663
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=embed-certs-480663
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_04_54_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:04:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-480663
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:33:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:29:02 +0000   Tue, 16 Jan 2024 03:04:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:29:02 +0000   Tue, 16 Jan 2024 03:04:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:29:02 +0000   Tue, 16 Jan 2024 03:04:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:29:02 +0000   Tue, 16 Jan 2024 03:13:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.150
	  Hostname:    embed-certs-480663
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 907673c0529d4fe7bddee1a62166d776
	  System UUID:                907673c0-529d-4fe7-bdde-e1a62166d776
	  Boot ID:                    ffa04338-2d5a-4308-af70-f8f39809837f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-stqh5                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-480663                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-embed-certs-480663             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-embed-certs-480663    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-j4786                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-480663             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-7d2fh               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node embed-certs-480663 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node embed-certs-480663 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node embed-certs-480663 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     28m                kubelet          Node embed-certs-480663 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node embed-certs-480663 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node embed-certs-480663 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node embed-certs-480663 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-480663 event: Registered Node embed-certs-480663 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-480663 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-480663 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-480663 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-480663 event: Registered Node embed-certs-480663 in Controller
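The node description above is what kubectl describe node reports for the control-plane node; a minimal sketch for re-checking the Ready condition directly, assuming the embed-certs-480663 kubectl context exists:

    kubectl --context embed-certs-480663 describe node embed-certs-480663
    kubectl --context embed-certs-480663 get node embed-certs-480663 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'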
	
	
	==> dmesg <==
	[Jan16 03:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069299] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.402616] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.433360] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153755] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000025] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.489527] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.501555] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.120462] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.139757] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.139498] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[  +0.235945] systemd-fstab-generator[708]: Ignoring "noauto" for root device
	[Jan16 03:13] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[ +15.343598] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618] <==
	{"level":"info","ts":"2024-01-16T03:13:18.722425Z","caller":"traceutil/trace.go:171","msg":"trace[1625719483] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"132.715386ms","start":"2024-01-16T03:13:18.589688Z","end":"2024-01-16T03:13:18.722403Z","steps":["trace[1625719483] 'process raft request'  (duration: 115.031256ms)","trace[1625719483] 'compare'  (duration: 17.560917ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T03:13:18.878708Z","caller":"traceutil/trace.go:171","msg":"trace[1566635712] linearizableReadLoop","detail":"{readStateIndex:581; appliedIndex:580; }","duration":"129.342435ms","start":"2024-01-16T03:13:18.74935Z","end":"2024-01-16T03:13:18.878692Z","steps":["trace[1566635712] 'read index received'  (duration: 107.01544ms)","trace[1566635712] 'applied index is now lower than readState.Index'  (duration: 22.326449ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T03:13:18.87887Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.521615ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-480663\" ","response":"range_response_count:1 size:5664"}
	{"level":"info","ts":"2024-01-16T03:13:18.878893Z","caller":"traceutil/trace.go:171","msg":"trace[1686190376] range","detail":"{range_begin:/registry/minions/embed-certs-480663; range_end:; response_count:1; response_revision:548; }","duration":"129.560168ms","start":"2024-01-16T03:13:18.749325Z","end":"2024-01-16T03:13:18.878886Z","steps":["trace[1686190376] 'agreement among raft nodes before linearized reading'  (duration: 129.457787ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:13:18.879491Z","caller":"traceutil/trace.go:171","msg":"trace[1610609892] transaction","detail":"{read_only:false; response_revision:548; number_of_response:1; }","duration":"148.155244ms","start":"2024-01-16T03:13:18.731319Z","end":"2024-01-16T03:13:18.879474Z","steps":["trace[1610609892] 'process raft request'  (duration: 125.173932ms)","trace[1610609892] 'compare'  (duration: 22.1172ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T03:14:04.534781Z","caller":"traceutil/trace.go:171","msg":"trace[771273906] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"153.761543ms","start":"2024-01-16T03:14:04.380984Z","end":"2024-01-16T03:14:04.534746Z","steps":["trace[771273906] 'process raft request'  (duration: 117.492611ms)","trace[771273906] 'compare'  (duration: 36.031757ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T03:14:04.535093Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.446934ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2024-01-16T03:14:04.535575Z","caller":"traceutil/trace.go:171","msg":"trace[48173144] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:610; }","duration":"146.089403ms","start":"2024-01-16T03:14:04.38947Z","end":"2024-01-16T03:14:04.535559Z","steps":["trace[48173144] 'agreement among raft nodes before linearized reading'  (duration: 145.36668ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:14:04.534771Z","caller":"traceutil/trace.go:171","msg":"trace[1414106003] linearizableReadLoop","detail":"{readStateIndex:653; appliedIndex:652; }","duration":"145.234081ms","start":"2024-01-16T03:14:04.389501Z","end":"2024-01-16T03:14:04.534735Z","steps":["trace[1414106003] 'read index received'  (duration: 108.92355ms)","trace[1414106003] 'applied index is now lower than readState.Index'  (duration: 36.309739ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T03:14:05.426288Z","caller":"traceutil/trace.go:171","msg":"trace[1112784573] linearizableReadLoop","detail":"{readStateIndex:654; appliedIndex:653; }","duration":"209.581136ms","start":"2024-01-16T03:14:05.21669Z","end":"2024-01-16T03:14:05.426272Z","steps":["trace[1112784573] 'read index received'  (duration: 209.30074ms)","trace[1112784573] 'applied index is now lower than readState.Index'  (duration: 279.639µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T03:14:05.426736Z","caller":"traceutil/trace.go:171","msg":"trace[784698471] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"227.949099ms","start":"2024-01-16T03:14:05.198772Z","end":"2024-01-16T03:14:05.426722Z","steps":["trace[784698471] 'process raft request'  (duration: 227.260693ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:14:05.426853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.167756ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T03:14:05.427706Z","caller":"traceutil/trace.go:171","msg":"trace[1316095772] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:611; }","duration":"211.032004ms","start":"2024-01-16T03:14:05.216662Z","end":"2024-01-16T03:14:05.427694Z","steps":["trace[1316095772] 'agreement among raft nodes before linearized reading'  (duration: 210.146643ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:23:12.356837Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":825}
	{"level":"info","ts":"2024-01-16T03:23:12.359392Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":825,"took":"2.279253ms","hash":3028259112}
	{"level":"info","ts":"2024-01-16T03:23:12.35946Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3028259112,"revision":825,"compact-revision":-1}
	{"level":"info","ts":"2024-01-16T03:28:12.366387Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1067}
	{"level":"info","ts":"2024-01-16T03:28:12.367794Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1067,"took":"1.067512ms","hash":3107173795}
	{"level":"info","ts":"2024-01-16T03:28:12.367881Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3107173795,"revision":1067,"compact-revision":825}
	{"level":"info","ts":"2024-01-16T03:32:41.61174Z","caller":"traceutil/trace.go:171","msg":"trace[1299656119] transaction","detail":"{read_only:false; response_revision:1528; number_of_response:1; }","duration":"274.773343ms","start":"2024-01-16T03:32:41.336932Z","end":"2024-01-16T03:32:41.611705Z","steps":["trace[1299656119] 'process raft request'  (duration: 274.638771ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:33:12.384521Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1309}
	{"level":"info","ts":"2024-01-16T03:33:12.386377Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1309,"took":"1.568968ms","hash":2424877542}
	{"level":"info","ts":"2024-01-16T03:33:12.386512Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2424877542,"revision":1309,"compact-revision":1067}
	{"level":"info","ts":"2024-01-16T03:33:14.025317Z","caller":"traceutil/trace.go:171","msg":"trace[721285480] transaction","detail":"{read_only:false; response_revision:1554; number_of_response:1; }","duration":"198.934325ms","start":"2024-01-16T03:33:13.826363Z","end":"2024-01-16T03:33:14.025297Z","steps":["trace[721285480] 'process raft request'  (duration: 198.714616ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:33:15.702568Z","caller":"traceutil/trace.go:171","msg":"trace[533415432] transaction","detail":"{read_only:false; response_revision:1555; number_of_response:1; }","duration":"124.695456ms","start":"2024-01-16T03:33:15.577851Z","end":"2024-01-16T03:33:15.702546Z","steps":["trace[533415432] 'process raft request'  (duration: 60.81662ms)","trace[533415432] 'compare'  (duration: 63.575267ms)"],"step_count":2}
	
	
	==> kernel <==
	 03:33:27 up 20 min,  0 users,  load average: 0.45, 0.27, 0.20
	Linux embed-certs-480663 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f] <==
	W0116 03:29:15.251626       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:29:15.251640       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:29:15.252803       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:30:14.101032       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 03:31:14.100470       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 03:31:15.252937       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:31:15.253027       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:31:15.253039       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:31:15.253066       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:31:15.253290       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:31:15.254587       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:32:14.100726       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 03:33:14.101498       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 03:33:14.254836       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:33:14.255050       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:33:14.255722       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 03:33:15.256347       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:33:15.256465       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:33:15.256492       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:33:15.256605       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:33:15.256719       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:33:15.257929       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
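The repeated 503s for v1beta1.metrics.k8s.io above mean the aggregated metrics API is not serving; a minimal sketch for confirming the APIService and its backing pod, assuming the embed-certs-480663 context (the pod name is taken from the node description above):

    kubectl --context embed-certs-480663 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-480663 -n kube-system get pod metrics-server-57f55c9bc5-7d2fh -o wide

If the APIService reports Available=False, the kube-controller-manager "stale GroupVersion discovery" errors further down are the expected knock-on effect.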
	
	
	==> kube-controller-manager [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994] <==
	I0116 03:27:57.553115       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:28:26.988462       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:28:27.562970       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:28:56.995615       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:28:57.573974       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:29:27.001718       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:29:27.589323       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0116 03:29:32.525331       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="410.105µs"
	I0116 03:29:43.526756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="123.986µs"
	E0116 03:29:57.009025       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:29:57.600532       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:30:27.015408       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:30:27.608436       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:30:57.022331       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:30:57.618653       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:31:27.029074       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:31:27.629320       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:31:57.034951       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:31:57.637096       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:32:27.041350       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:32:27.648093       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:32:57.049243       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:32:57.657231       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:33:27.055946       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:33:27.672702       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047] <==
	I0116 03:13:15.782754       1 server_others.go:69] "Using iptables proxy"
	I0116 03:13:15.798951       1 node.go:141] Successfully retrieved node IP: 192.168.61.150
	I0116 03:13:15.858503       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 03:13:15.858593       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 03:13:15.863888       1 server_others.go:152] "Using iptables Proxier"
	I0116 03:13:15.863954       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 03:13:15.864265       1 server.go:846] "Version info" version="v1.28.4"
	I0116 03:13:15.864301       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:13:15.865371       1 config.go:188] "Starting service config controller"
	I0116 03:13:15.865420       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 03:13:15.865444       1 config.go:97] "Starting endpoint slice config controller"
	I0116 03:13:15.865447       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 03:13:15.868385       1 config.go:315] "Starting node config controller"
	I0116 03:13:15.868530       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 03:13:15.966532       1 shared_informer.go:318] Caches are synced for service config
	I0116 03:13:15.968545       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 03:13:15.968991       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8] <==
	I0116 03:13:11.229387       1 serving.go:348] Generated self-signed cert in-memory
	W0116 03:13:14.179745       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0116 03:13:14.179883       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 03:13:14.179938       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 03:13:14.179978       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 03:13:14.250888       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0116 03:13:14.251012       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:13:14.257604       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0116 03:13:14.257787       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 03:13:14.258930       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0116 03:13:14.259065       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0116 03:13:14.358269       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:12:41 UTC, ends at Tue 2024-01-16 03:33:28 UTC. --
	Jan 16 03:30:58 embed-certs-480663 kubelet[930]: E0116 03:30:58.507411     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:31:07 embed-certs-480663 kubelet[930]: E0116 03:31:07.533044     930 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:31:07 embed-certs-480663 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:31:07 embed-certs-480663 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:31:07 embed-certs-480663 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:31:13 embed-certs-480663 kubelet[930]: E0116 03:31:13.506716     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:31:24 embed-certs-480663 kubelet[930]: E0116 03:31:24.506883     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:31:39 embed-certs-480663 kubelet[930]: E0116 03:31:39.507378     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:31:54 embed-certs-480663 kubelet[930]: E0116 03:31:54.507096     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:32:06 embed-certs-480663 kubelet[930]: E0116 03:32:06.506560     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:32:07 embed-certs-480663 kubelet[930]: E0116 03:32:07.532896     930 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:32:07 embed-certs-480663 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:32:07 embed-certs-480663 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:32:07 embed-certs-480663 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:32:18 embed-certs-480663 kubelet[930]: E0116 03:32:18.511730     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:32:32 embed-certs-480663 kubelet[930]: E0116 03:32:32.506954     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:32:45 embed-certs-480663 kubelet[930]: E0116 03:32:45.507380     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:32:56 embed-certs-480663 kubelet[930]: E0116 03:32:56.507272     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:33:07 embed-certs-480663 kubelet[930]: E0116 03:33:07.512687     930 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jan 16 03:33:07 embed-certs-480663 kubelet[930]: E0116 03:33:07.539356     930 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:33:07 embed-certs-480663 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:33:07 embed-certs-480663 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:33:07 embed-certs-480663 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:33:10 embed-certs-480663 kubelet[930]: E0116 03:33:10.508318     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	Jan 16 03:33:24 embed-certs-480663 kubelet[930]: E0116 03:33:24.507128     930 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7d2fh" podUID="512cf579-f335-4995-8721-74bb84da776e"
	
	
	==> storage-provisioner [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657] <==
	I0116 03:13:46.907858       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:13:46.922741       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:13:46.922839       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:14:04.372489       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:14:04.373472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-480663_02385622-396e-4f2a-a1a7-96b11526d536!
	I0116 03:14:04.381289       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"74d40386-f551-4067-ae35-b700d12b05b3", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-480663_02385622-396e-4f2a-a1a7-96b11526d536 became leader
	I0116 03:14:04.474327       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-480663_02385622-396e-4f2a-a1a7-96b11526d536!
	
	
	==> storage-provisioner [653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76] <==
	I0116 03:13:15.748004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0116 03:13:45.750999       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-480663 -n embed-certs-480663
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-480663 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-7d2fh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-480663 describe pod metrics-server-57f55c9bc5-7d2fh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-480663 describe pod metrics-server-57f55c9bc5-7d2fh: exit status 1 (92.785356ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-7d2fh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-480663 describe pod metrics-server-57f55c9bc5-7d2fh: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (404.64s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (174.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0116 03:29:50.170235  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-788237 -n old-k8s-version-788237
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-16 03:32:00.760371601 +0000 UTC m=+5498.644198186
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-788237 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-788237 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.826µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-788237 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-788237 -n old-k8s-version-788237
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-788237 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-788237 logs -n 25: (1.918669802s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-920153                              | cert-expiration-920153       | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	| start   | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| pause   | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-100619                                        | pause-100619                 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-807979 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:03 UTC |
	|         | disable-driver-mounts-807979                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:06 UTC |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-934668             | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-934668                                   | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-480663            | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-788237        | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-775571  | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC | 16 Jan 24 03:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC |                     |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-934668                  | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-480663                 | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-934668                                   | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC | 16 Jan 24 03:24 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC | 16 Jan 24 03:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-788237             | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC | 16 Jan 24 03:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-775571       | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC | 16 Jan 24 03:23 UTC |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:08:55
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:08:55.523172 1011955 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:08:55.523367 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:08:55.523379 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:08:55.523384 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:08:55.523559 1011955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 03:08:55.524097 1011955 out.go:303] Setting JSON to false
	I0116 03:08:55.525108 1011955 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13885,"bootTime":1705360651,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 03:08:55.525170 1011955 start.go:138] virtualization: kvm guest
	I0116 03:08:55.527591 1011955 out.go:177] * [default-k8s-diff-port-775571] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 03:08:55.529034 1011955 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:08:55.529110 1011955 notify.go:220] Checking for updates...
	I0116 03:08:55.530388 1011955 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:08:55.531787 1011955 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:08:55.533364 1011955 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 03:08:55.534716 1011955 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 03:08:55.535979 1011955 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:08:55.537715 1011955 config.go:182] Loaded profile config "default-k8s-diff-port-775571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:08:55.538436 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:08:55.538496 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:08:55.553180 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I0116 03:08:55.553640 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:08:55.554204 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:08:55.554227 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:08:55.554581 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:08:55.554799 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:08:55.555037 1011955 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:08:55.555380 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:08:55.555442 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:08:55.570254 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46641
	I0116 03:08:55.570682 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:08:55.571208 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:08:55.571235 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:08:55.571622 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:08:55.571835 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:08:55.608921 1011955 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 03:08:55.610466 1011955 start.go:298] selected driver: kvm2
	I0116 03:08:55.610482 1011955 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-775571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-775571 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:08:55.610637 1011955 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:08:55.611416 1011955 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:08:55.611501 1011955 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17967-971255/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 03:08:55.627062 1011955 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 03:08:55.627489 1011955 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 03:08:55.627568 1011955 cni.go:84] Creating CNI manager for ""
	I0116 03:08:55.627585 1011955 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:08:55.627598 1011955 start_flags.go:321] config:
	{Name:default-k8s-diff-port-775571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-775571 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:08:55.627820 1011955 iso.go:125] acquiring lock: {Name:mkc0ce0b4b435c5fb7570521b794e5982bb7bf6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:08:55.630054 1011955 out.go:177] * Starting control plane node default-k8s-diff-port-775571 in cluster default-k8s-diff-port-775571
	I0116 03:08:56.294081 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:08:55.631888 1011955 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:08:55.631938 1011955 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 03:08:55.631953 1011955 cache.go:56] Caching tarball of preloaded images
	I0116 03:08:55.632083 1011955 preload.go:174] Found /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 03:08:55.632097 1011955 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 03:08:55.632257 1011955 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/config.json ...
	I0116 03:08:55.632487 1011955 start.go:365] acquiring machines lock for default-k8s-diff-port-775571: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:08:59.366084 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:05.446075 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:08.518122 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:14.598126 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:17.670148 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:23.750127 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:26.822075 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:32.902064 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:35.974222 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:42.054100 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:45.126136 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:51.206133 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:09:54.278161 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:00.358119 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:03.430197 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:09.510091 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:12.582128 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:18.662160 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:21.734193 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:27.814164 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:30.886157 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:36.966149 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:40.038146 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:46.118124 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:49.190101 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:55.269989 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:10:58.342124 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:04.422158 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:07.494110 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:13.574119 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:16.646126 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:22.726139 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:25.798139 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:31.878112 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:34.950159 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:41.030157 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:44.102169 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:50.182089 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:53.254213 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:11:59.334156 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:02.406103 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:08.486171 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:11.558273 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:17.638145 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:20.710185 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:26.790125 1011460 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.29:22: connect: no route to host
	I0116 03:12:29.794327 1011501 start.go:369] acquired machines lock for "embed-certs-480663" in 4m35.850983647s
	I0116 03:12:29.794418 1011501 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:12:29.794429 1011501 fix.go:54] fixHost starting: 
	I0116 03:12:29.794787 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:12:29.794827 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:12:29.810363 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0116 03:12:29.810847 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:12:29.811350 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:12:29.811377 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:12:29.811743 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:12:29.811943 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:29.812098 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetState
	I0116 03:12:29.813836 1011501 fix.go:102] recreateIfNeeded on embed-certs-480663: state=Stopped err=<nil>
	I0116 03:12:29.813863 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	W0116 03:12:29.814085 1011501 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:12:29.816073 1011501 out.go:177] * Restarting existing kvm2 VM for "embed-certs-480663" ...
	I0116 03:12:29.792154 1011460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:12:29.792196 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:12:29.794110 1011460 machine.go:91] provisioned docker machine in 4m37.362238239s
	I0116 03:12:29.794181 1011460 fix.go:56] fixHost completed within 4m37.38762384s
	I0116 03:12:29.794190 1011460 start.go:83] releasing machines lock for "no-preload-934668", held for 4m37.387657639s
	W0116 03:12:29.794218 1011460 start.go:694] error starting host: provision: host is not running
	W0116 03:12:29.794363 1011460 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 03:12:29.794373 1011460 start.go:709] Will try again in 5 seconds ...
	I0116 03:12:29.817479 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Start
	I0116 03:12:29.817644 1011501 main.go:141] libmachine: (embed-certs-480663) Ensuring networks are active...
	I0116 03:12:29.818499 1011501 main.go:141] libmachine: (embed-certs-480663) Ensuring network default is active
	I0116 03:12:29.818799 1011501 main.go:141] libmachine: (embed-certs-480663) Ensuring network mk-embed-certs-480663 is active
	I0116 03:12:29.819175 1011501 main.go:141] libmachine: (embed-certs-480663) Getting domain xml...
	I0116 03:12:29.819788 1011501 main.go:141] libmachine: (embed-certs-480663) Creating domain...
	I0116 03:12:31.021602 1011501 main.go:141] libmachine: (embed-certs-480663) Waiting to get IP...
	I0116 03:12:31.022948 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:31.023338 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:31.023411 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:31.023303 1012490 retry.go:31] will retry after 276.789085ms: waiting for machine to come up
	I0116 03:12:31.301941 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:31.302463 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:31.302500 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:31.302382 1012490 retry.go:31] will retry after 256.134625ms: waiting for machine to come up
	I0116 03:12:31.560002 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:31.560544 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:31.560571 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:31.560490 1012490 retry.go:31] will retry after 439.008262ms: waiting for machine to come up
	I0116 03:12:32.001188 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:32.001642 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:32.001679 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:32.001577 1012490 retry.go:31] will retry after 408.362832ms: waiting for machine to come up
	I0116 03:12:32.411058 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:32.411391 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:32.411423 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:32.411337 1012490 retry.go:31] will retry after 734.236059ms: waiting for machine to come up
	I0116 03:12:33.146871 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:33.147227 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:33.147255 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:33.147168 1012490 retry.go:31] will retry after 675.663635ms: waiting for machine to come up
	I0116 03:12:33.824145 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:33.824670 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:33.824702 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:33.824595 1012490 retry.go:31] will retry after 759.820531ms: waiting for machine to come up
	I0116 03:12:34.796140 1011460 start.go:365] acquiring machines lock for no-preload-934668: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:12:34.585458 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:34.585893 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:34.585919 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:34.585853 1012490 retry.go:31] will retry after 1.421527223s: waiting for machine to come up
	I0116 03:12:36.008778 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:36.009237 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:36.009263 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:36.009198 1012490 retry.go:31] will retry after 1.590569463s: waiting for machine to come up
	I0116 03:12:37.601872 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:37.602247 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:37.602280 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:37.602215 1012490 retry.go:31] will retry after 1.734508863s: waiting for machine to come up
	I0116 03:12:39.339028 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:39.339618 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:39.339652 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:39.339547 1012490 retry.go:31] will retry after 2.357594548s: waiting for machine to come up
	I0116 03:12:41.699172 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:41.699607 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:41.699679 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:41.699610 1012490 retry.go:31] will retry after 2.660303994s: waiting for machine to come up
	I0116 03:12:44.362811 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:44.363139 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | unable to find current IP address of domain embed-certs-480663 in network mk-embed-certs-480663
	I0116 03:12:44.363173 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | I0116 03:12:44.363109 1012490 retry.go:31] will retry after 3.358505884s: waiting for machine to come up
	I0116 03:12:47.725123 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.725787 1011501 main.go:141] libmachine: (embed-certs-480663) Found IP for machine: 192.168.61.150
	I0116 03:12:47.725838 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has current primary IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.725847 1011501 main.go:141] libmachine: (embed-certs-480663) Reserving static IP address...
	I0116 03:12:47.726433 1011501 main.go:141] libmachine: (embed-certs-480663) Reserved static IP address: 192.168.61.150
	I0116 03:12:47.726458 1011501 main.go:141] libmachine: (embed-certs-480663) Waiting for SSH to be available...
	I0116 03:12:47.726486 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "embed-certs-480663", mac: "52:54:00:1c:0e:bd", ip: "192.168.61.150"} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:47.726546 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | skip adding static IP to network mk-embed-certs-480663 - found existing host DHCP lease matching {name: "embed-certs-480663", mac: "52:54:00:1c:0e:bd", ip: "192.168.61.150"}
	I0116 03:12:47.726579 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Getting to WaitForSSH function...
	I0116 03:12:47.728781 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.729264 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:47.729316 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.729447 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Using SSH client type: external
	I0116 03:12:47.729484 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa (-rw-------)
	I0116 03:12:47.729519 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:12:47.729530 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | About to run SSH command:
	I0116 03:12:47.729542 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | exit 0
	I0116 03:12:47.817660 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | SSH cmd err, output: <nil>: 
	I0116 03:12:47.818207 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetConfigRaw
	I0116 03:12:47.818904 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetIP
	I0116 03:12:47.821493 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.821899 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:47.821938 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.822249 1011501 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/config.json ...
	I0116 03:12:47.822458 1011501 machine.go:88] provisioning docker machine ...
	I0116 03:12:47.822477 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:47.822718 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetMachineName
	I0116 03:12:47.822914 1011501 buildroot.go:166] provisioning hostname "embed-certs-480663"
	I0116 03:12:47.822936 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetMachineName
	I0116 03:12:47.823106 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:47.825414 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.825772 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:47.825821 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.825982 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:47.826176 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:47.826353 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:47.826513 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:47.826691 1011501 main.go:141] libmachine: Using SSH client type: native
	I0116 03:12:47.827071 1011501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0116 03:12:47.827091 1011501 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-480663 && echo "embed-certs-480663" | sudo tee /etc/hostname
	I0116 03:12:47.955360 1011501 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-480663
	
	I0116 03:12:47.955398 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:47.958259 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.958575 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:47.958607 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:47.958814 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:47.959044 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:47.959202 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:47.959343 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:47.959496 1011501 main.go:141] libmachine: Using SSH client type: native
	I0116 03:12:47.959863 1011501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0116 03:12:47.959892 1011501 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-480663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-480663/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-480663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:12:48.082423 1011501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:12:48.082457 1011501 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 03:12:48.082515 1011501 buildroot.go:174] setting up certificates
	I0116 03:12:48.082553 1011501 provision.go:83] configureAuth start
	I0116 03:12:48.082569 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetMachineName
	I0116 03:12:48.082866 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetIP
	I0116 03:12:48.085315 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.085590 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.085622 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.085766 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.088029 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.088306 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.088331 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.088499 1011501 provision.go:138] copyHostCerts
	I0116 03:12:48.088581 1011501 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 03:12:48.088625 1011501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 03:12:48.088713 1011501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 03:12:48.088856 1011501 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 03:12:48.088866 1011501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 03:12:48.088903 1011501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 03:12:48.088981 1011501 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 03:12:48.088996 1011501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 03:12:48.089030 1011501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 03:12:48.089101 1011501 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.embed-certs-480663 san=[192.168.61.150 192.168.61.150 localhost 127.0.0.1 minikube embed-certs-480663]
	I0116 03:12:48.160830 1011501 provision.go:172] copyRemoteCerts
	I0116 03:12:48.160903 1011501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:12:48.160965 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.163939 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.164277 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.164307 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.164531 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.164805 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.165006 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.165166 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:12:48.256101 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:12:48.280042 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 03:12:48.303724 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:12:48.326468 1011501 provision.go:86] duration metric: configureAuth took 243.88726ms
	I0116 03:12:48.326506 1011501 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:12:48.326754 1011501 config.go:182] Loaded profile config "embed-certs-480663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:12:48.326876 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.329344 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.329821 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.329859 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.329995 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.330217 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.330434 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.330590 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.330744 1011501 main.go:141] libmachine: Using SSH client type: native
	I0116 03:12:48.331080 1011501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0116 03:12:48.331099 1011501 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:12:48.635409 1011501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:12:48.635460 1011501 machine.go:91] provisioned docker machine in 812.972689ms
	I0116 03:12:48.635473 1011501 start.go:300] post-start starting for "embed-certs-480663" (driver="kvm2")
	I0116 03:12:48.635489 1011501 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:12:48.635520 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:48.635975 1011501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:12:48.636005 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.638568 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.638912 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.638947 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.639052 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.639272 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.639448 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.639608 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:12:48.729202 1011501 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:12:48.733911 1011501 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:12:48.733985 1011501 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 03:12:48.734062 1011501 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 03:12:48.734185 1011501 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 03:12:48.734437 1011501 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:12:48.744474 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:12:48.767453 1011501 start.go:303] post-start completed in 131.962731ms
	I0116 03:12:48.767483 1011501 fix.go:56] fixHost completed within 18.973054797s
	I0116 03:12:48.767537 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.770091 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.770364 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.770410 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.770516 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.770700 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.770885 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.771062 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.771258 1011501 main.go:141] libmachine: Using SSH client type: native
	I0116 03:12:48.771725 1011501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0116 03:12:48.771743 1011501 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:12:48.886832 1011681 start.go:369] acquired machines lock for "old-k8s-version-788237" in 4m28.568927849s
	I0116 03:12:48.886918 1011681 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:12:48.886930 1011681 fix.go:54] fixHost starting: 
	I0116 03:12:48.887453 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:12:48.887501 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:12:48.904045 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43423
	I0116 03:12:48.904557 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:12:48.905072 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:12:48.905099 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:12:48.905518 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:12:48.905746 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:12:48.905912 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetState
	I0116 03:12:48.907596 1011681 fix.go:102] recreateIfNeeded on old-k8s-version-788237: state=Stopped err=<nil>
	I0116 03:12:48.907628 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	W0116 03:12:48.907820 1011681 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:12:48.909761 1011681 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-788237" ...
	I0116 03:12:48.911234 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Start
	I0116 03:12:48.911413 1011681 main.go:141] libmachine: (old-k8s-version-788237) Ensuring networks are active...
	I0116 03:12:48.912247 1011681 main.go:141] libmachine: (old-k8s-version-788237) Ensuring network default is active
	I0116 03:12:48.912596 1011681 main.go:141] libmachine: (old-k8s-version-788237) Ensuring network mk-old-k8s-version-788237 is active
	I0116 03:12:48.913077 1011681 main.go:141] libmachine: (old-k8s-version-788237) Getting domain xml...
	I0116 03:12:48.913678 1011681 main.go:141] libmachine: (old-k8s-version-788237) Creating domain...
	I0116 03:12:50.157059 1011681 main.go:141] libmachine: (old-k8s-version-788237) Waiting to get IP...
	I0116 03:12:50.158170 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:50.158626 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:50.158723 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:50.158597 1012611 retry.go:31] will retry after 219.259678ms: waiting for machine to come up
	I0116 03:12:48.886627 1011501 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374768.861682880
	
	I0116 03:12:48.886687 1011501 fix.go:206] guest clock: 1705374768.861682880
	I0116 03:12:48.886698 1011501 fix.go:219] Guest: 2024-01-16 03:12:48.86168288 +0000 UTC Remote: 2024-01-16 03:12:48.767487292 +0000 UTC m=+294.991502995 (delta=94.195588ms)
	I0116 03:12:48.886721 1011501 fix.go:190] guest clock delta is within tolerance: 94.195588ms
	I0116 03:12:48.886726 1011501 start.go:83] releasing machines lock for "embed-certs-480663", held for 19.09234257s
	I0116 03:12:48.886751 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:48.887062 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetIP
	I0116 03:12:48.889754 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.890098 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.890128 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.890347 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:48.890906 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:48.891124 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:12:48.891223 1011501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:12:48.891269 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.891451 1011501 ssh_runner.go:195] Run: cat /version.json
	I0116 03:12:48.891477 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:12:48.894134 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.894220 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.894577 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.894619 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.894646 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:48.894672 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:48.894934 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.894944 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:12:48.895100 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.895122 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:12:48.895200 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.895270 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:12:48.895367 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:12:48.895401 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:12:48.979839 1011501 ssh_runner.go:195] Run: systemctl --version
	I0116 03:12:49.008683 1011501 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:12:49.161550 1011501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:12:49.167838 1011501 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:12:49.167937 1011501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:12:49.184428 1011501 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:12:49.184457 1011501 start.go:475] detecting cgroup driver to use...
	I0116 03:12:49.184542 1011501 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:12:49.202177 1011501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:12:49.215021 1011501 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:12:49.215100 1011501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:12:49.230944 1011501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:12:49.245401 1011501 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:12:49.368410 1011501 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:12:49.490710 1011501 docker.go:233] disabling docker service ...
	I0116 03:12:49.490804 1011501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:12:49.504462 1011501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:12:49.515523 1011501 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:12:49.632751 1011501 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:12:49.769999 1011501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:12:49.785053 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:12:49.803377 1011501 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:12:49.803436 1011501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:12:49.812729 1011501 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:12:49.812804 1011501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:12:49.822106 1011501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:12:49.831270 1011501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:12:49.840256 1011501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:12:49.849610 1011501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:12:49.858638 1011501 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:12:49.858713 1011501 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:12:49.872437 1011501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:12:49.882932 1011501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:12:50.003747 1011501 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:12:50.178808 1011501 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:12:50.178901 1011501 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:12:50.184631 1011501 start.go:543] Will wait 60s for crictl version
	I0116 03:12:50.184708 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:12:50.189104 1011501 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:12:50.226713 1011501 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:12:50.226833 1011501 ssh_runner.go:195] Run: crio --version
	I0116 03:12:50.285581 1011501 ssh_runner.go:195] Run: crio --version
	I0116 03:12:50.336274 1011501 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 03:12:50.337928 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetIP
	I0116 03:12:50.340938 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:50.341389 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:12:50.341434 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:12:50.341707 1011501 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0116 03:12:50.346116 1011501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:12:50.358498 1011501 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:12:50.358562 1011501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:12:50.399016 1011501 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 03:12:50.399102 1011501 ssh_runner.go:195] Run: which lz4
	I0116 03:12:50.403562 1011501 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:12:50.407754 1011501 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:12:50.407781 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 03:12:52.338554 1011501 crio.go:444] Took 1.935021 seconds to copy over tarball
	I0116 03:12:52.338657 1011501 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:12:50.379220 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:50.379668 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:50.379707 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:50.379617 1012611 retry.go:31] will retry after 265.569137ms: waiting for machine to come up
	I0116 03:12:50.647311 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:50.648272 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:50.648308 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:50.648165 1012611 retry.go:31] will retry after 322.357919ms: waiting for machine to come up
	I0116 03:12:50.971860 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:50.972437 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:50.972466 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:50.972414 1012611 retry.go:31] will retry after 554.899929ms: waiting for machine to come up
	I0116 03:12:51.529304 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:51.529854 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:51.529881 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:51.529781 1012611 retry.go:31] will retry after 666.131492ms: waiting for machine to come up
	I0116 03:12:52.197244 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:52.197715 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:52.197747 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:52.197677 1012611 retry.go:31] will retry after 905.276637ms: waiting for machine to come up
	I0116 03:12:53.104496 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:53.105075 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:53.105113 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:53.105018 1012611 retry.go:31] will retry after 849.59257ms: waiting for machine to come up
	I0116 03:12:53.956756 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:53.957265 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:53.957310 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:53.957214 1012611 retry.go:31] will retry after 1.208772763s: waiting for machine to come up
	I0116 03:12:55.168258 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:55.168715 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:55.168750 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:55.168656 1012611 retry.go:31] will retry after 1.842317385s: waiting for machine to come up
	I0116 03:12:55.368146 1011501 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.02945237s)
	I0116 03:12:55.368186 1011501 crio.go:451] Took 3.029602 seconds to extract the tarball
	I0116 03:12:55.368197 1011501 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:12:55.409542 1011501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:12:55.468263 1011501 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:12:55.468298 1011501 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:12:55.468401 1011501 ssh_runner.go:195] Run: crio config
	I0116 03:12:55.534437 1011501 cni.go:84] Creating CNI manager for ""
	I0116 03:12:55.534473 1011501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:12:55.534500 1011501 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:12:55.534554 1011501 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.150 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-480663 NodeName:embed-certs-480663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:12:55.534761 1011501 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-480663"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:12:55.534856 1011501 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-480663 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-480663 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:12:55.534953 1011501 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:12:55.550549 1011501 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:12:55.550643 1011501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:12:55.560831 1011501 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0116 03:12:55.578611 1011501 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:12:55.600405 1011501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0116 03:12:55.620622 1011501 ssh_runner.go:195] Run: grep 192.168.61.150	control-plane.minikube.internal$ /etc/hosts
	I0116 03:12:55.625483 1011501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:12:55.638353 1011501 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663 for IP: 192.168.61.150
	I0116 03:12:55.638404 1011501 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:12:55.638588 1011501 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 03:12:55.638649 1011501 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 03:12:55.638772 1011501 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/client.key
	I0116 03:12:55.638852 1011501 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/apiserver.key.2512ac4f
	I0116 03:12:55.638933 1011501 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/proxy-client.key
	I0116 03:12:55.639122 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 03:12:55.639164 1011501 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 03:12:55.639180 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 03:12:55.639217 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 03:12:55.639254 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:12:55.639286 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 03:12:55.639341 1011501 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:12:55.640395 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:12:55.667612 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 03:12:55.692576 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:12:55.717257 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/embed-certs-480663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:12:55.741983 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:12:55.766577 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:12:55.792372 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:12:55.817385 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:12:55.843037 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 03:12:55.873486 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:12:55.898499 1011501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 03:12:55.925406 1011501 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:12:55.945389 1011501 ssh_runner.go:195] Run: openssl version
	I0116 03:12:55.951579 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 03:12:55.963228 1011501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 03:12:55.968375 1011501 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 03:12:55.968448 1011501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 03:12:55.974792 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:12:55.986496 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:12:55.998112 1011501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:12:56.003308 1011501 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:12:56.003397 1011501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:12:56.009406 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:12:56.022123 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 03:12:56.035041 1011501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 03:12:56.040564 1011501 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 03:12:56.040636 1011501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 03:12:56.047058 1011501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 03:12:56.059998 1011501 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:12:56.065241 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:12:56.071918 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:12:56.078512 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:12:56.085645 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:12:56.092405 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:12:56.099010 1011501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
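The preceding block is minikube installing the cluster's PKI on the VM: each CA bundle is hashed with openssl and symlinked into /etc/ssl/certs under its subject hash, and the existing control-plane client certificates are checked for expiry within the next 24 hours. A minimal stand-alone sketch of those two checks, reusing the paths shown in the log (any other certificate path works the same way):

    # Subject-hash symlink so OpenSSL CApath lookups can find this CA.
    CERT=/usr/share/ca-certificates/9784822.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"

    # Exit status 0 only if the cert is still valid 86400s (24h) from now.
    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-etcd-client.crt \
      && echo "still valid tomorrow" || echo "expires within 24h"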
	I0116 03:12:56.105679 1011501 kubeadm.go:404] StartCluster: {Name:embed-certs-480663 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-480663 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:12:56.105773 1011501 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:12:56.105859 1011501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:12:56.153053 1011501 cri.go:89] found id: ""
	I0116 03:12:56.153168 1011501 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:12:56.165415 1011501 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:12:56.165448 1011501 kubeadm.go:636] restartCluster start
	I0116 03:12:56.165516 1011501 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:12:56.175884 1011501 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:56.177147 1011501 kubeconfig.go:92] found "embed-certs-480663" server: "https://192.168.61.150:8443"
	I0116 03:12:56.179924 1011501 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:12:56.189868 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:56.189935 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:56.202554 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:56.690001 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:56.690087 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:56.702873 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:57.190439 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:57.190526 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:57.203483 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:57.691004 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:57.691089 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:57.705628 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:58.190127 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:58.190268 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:58.203066 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:58.690714 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:58.690836 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:58.703512 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:57.013734 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:57.014338 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:57.014374 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:57.014291 1012611 retry.go:31] will retry after 1.812964487s: waiting for machine to come up
	I0116 03:12:58.828551 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:12:58.829042 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:12:58.829068 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:12:58.828972 1012611 retry.go:31] will retry after 2.844481084s: waiting for machine to come up
	I0116 03:12:59.190193 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:59.190305 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:59.202672 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:12:59.690192 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:12:59.690304 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:12:59.702988 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:00.190097 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:00.190194 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:00.202817 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:00.690356 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:00.690469 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:00.703381 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:01.190016 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:01.190103 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:01.205508 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:01.689888 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:01.689982 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:01.706681 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:02.190049 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:02.190151 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:02.206668 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:02.690222 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:02.690361 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:02.706881 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:03.189909 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:03.190004 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:03.203138 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:03.690789 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:03.690907 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:03.703489 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:01.674784 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:01.675368 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:13:01.675395 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:13:01.675337 1012611 retry.go:31] will retry after 3.198176955s: waiting for machine to come up
	I0116 03:13:04.875399 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:04.875880 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | unable to find current IP address of domain old-k8s-version-788237 in network mk-old-k8s-version-788237
	I0116 03:13:04.875911 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | I0116 03:13:04.875824 1012611 retry.go:31] will retry after 3.762316841s: waiting for machine to come up
	I0116 03:13:04.190804 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:04.190926 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:04.203114 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:04.690805 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:04.690935 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:04.703456 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:05.190648 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:05.190760 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:05.203129 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:05.690744 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:05.690892 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:05.703526 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:06.190070 1011501 api_server.go:166] Checking apiserver status ...
	I0116 03:13:06.190217 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:06.202457 1011501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:06.202494 1011501 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:13:06.202504 1011501 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:13:06.202517 1011501 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:13:06.202598 1011501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:13:06.241146 1011501 cri.go:89] found id: ""
	I0116 03:13:06.241255 1011501 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:13:06.257465 1011501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:13:06.267655 1011501 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:13:06.267728 1011501 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:06.277601 1011501 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:06.277628 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:06.388578 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:07.024945 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:07.210419 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:07.275175 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
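With no kube-system containers and no kubeconfig files left on disk, minikube rebuilds the control plane by replaying individual kubeadm init phases instead of running a full init. Condensed, the sequence above amounts to the following sketch (binary path, Kubernetes version, and config path are exactly those shown in the log):

    KUBEADM_PATH="/var/lib/minikube/binaries/v1.28.4:$PATH"
    CFG=/var/tmp/minikube/kubeadm.yaml
    # Regenerate certs, kubeconfigs, kubelet config, static pod manifests, and local etcd.
    sudo env PATH="$KUBEADM_PATH" kubeadm init phase certs all         --config "$CFG"
    sudo env PATH="$KUBEADM_PATH" kubeadm init phase kubeconfig all    --config "$CFG"
    sudo env PATH="$KUBEADM_PATH" kubeadm init phase kubelet-start     --config "$CFG"
    sudo env PATH="$KUBEADM_PATH" kubeadm init phase control-plane all --config "$CFG"
    sudo env PATH="$KUBEADM_PATH" kubeadm init phase etcd local        --config "$CFG"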
	I0116 03:13:07.353969 1011501 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:13:07.354074 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:07.854253 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:08.354855 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:10.035188 1011955 start.go:369] acquired machines lock for "default-k8s-diff-port-775571" in 4m14.402660122s
	I0116 03:13:10.035270 1011955 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:13:10.035278 1011955 fix.go:54] fixHost starting: 
	I0116 03:13:10.035719 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:10.035767 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:10.054435 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35337
	I0116 03:13:10.054968 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:10.055812 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:13:10.055849 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:10.056304 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:10.056546 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:10.056719 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetState
	I0116 03:13:10.058431 1011955 fix.go:102] recreateIfNeeded on default-k8s-diff-port-775571: state=Stopped err=<nil>
	I0116 03:13:10.058467 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	W0116 03:13:10.058666 1011955 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:13:10.060742 1011955 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-775571" ...
	I0116 03:13:08.642785 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.643327 1011681 main.go:141] libmachine: (old-k8s-version-788237) Found IP for machine: 192.168.39.91
	I0116 03:13:08.643356 1011681 main.go:141] libmachine: (old-k8s-version-788237) Reserving static IP address...
	I0116 03:13:08.643376 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has current primary IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.643757 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "old-k8s-version-788237", mac: "52:54:00:64:b7:2e", ip: "192.168.39.91"} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:08.643780 1011681 main.go:141] libmachine: (old-k8s-version-788237) Reserved static IP address: 192.168.39.91
	I0116 03:13:08.643798 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | skip adding static IP to network mk-old-k8s-version-788237 - found existing host DHCP lease matching {name: "old-k8s-version-788237", mac: "52:54:00:64:b7:2e", ip: "192.168.39.91"}
	I0116 03:13:08.643810 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Getting to WaitForSSH function...
	I0116 03:13:08.643819 1011681 main.go:141] libmachine: (old-k8s-version-788237) Waiting for SSH to be available...
	I0116 03:13:08.646037 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.646391 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:08.646437 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.646519 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Using SSH client type: external
	I0116 03:13:08.646553 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa (-rw-------)
	I0116 03:13:08.646581 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:13:08.646591 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | About to run SSH command:
	I0116 03:13:08.646599 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | exit 0
	I0116 03:13:08.738009 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | SSH cmd err, output: <nil>: 
	I0116 03:13:08.738363 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetConfigRaw
	I0116 03:13:08.739116 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetIP
	I0116 03:13:08.741759 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.742196 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:08.742235 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.742479 1011681 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/config.json ...
	I0116 03:13:08.742682 1011681 machine.go:88] provisioning docker machine ...
	I0116 03:13:08.742701 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:08.742937 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetMachineName
	I0116 03:13:08.743154 1011681 buildroot.go:166] provisioning hostname "old-k8s-version-788237"
	I0116 03:13:08.743184 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetMachineName
	I0116 03:13:08.743338 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:08.745489 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.745856 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:08.745897 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.746073 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:08.746292 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:08.746426 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:08.746580 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:08.746791 1011681 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:08.747298 1011681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0116 03:13:08.747322 1011681 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-788237 && echo "old-k8s-version-788237" | sudo tee /etc/hostname
	I0116 03:13:08.878928 1011681 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-788237
	
	I0116 03:13:08.878966 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:08.882019 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.882417 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:08.882468 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:08.882564 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:08.882806 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:08.883022 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:08.883202 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:08.883384 1011681 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:08.883704 1011681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0116 03:13:08.883723 1011681 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-788237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-788237/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-788237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:13:09.011161 1011681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:13:09.011209 1011681 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 03:13:09.011245 1011681 buildroot.go:174] setting up certificates
	I0116 03:13:09.011261 1011681 provision.go:83] configureAuth start
	I0116 03:13:09.011275 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetMachineName
	I0116 03:13:09.011649 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetIP
	I0116 03:13:09.014580 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.014920 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.014954 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.015107 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:09.017381 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.017701 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.017731 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.017854 1011681 provision.go:138] copyHostCerts
	I0116 03:13:09.017937 1011681 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 03:13:09.017951 1011681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 03:13:09.018028 1011681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 03:13:09.018175 1011681 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 03:13:09.018190 1011681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 03:13:09.018223 1011681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 03:13:09.018307 1011681 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 03:13:09.018318 1011681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 03:13:09.018342 1011681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 03:13:09.018403 1011681 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-788237 san=[192.168.39.91 192.168.39.91 localhost 127.0.0.1 minikube old-k8s-version-788237]
	I0116 03:13:09.280154 1011681 provision.go:172] copyRemoteCerts
	I0116 03:13:09.280224 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:13:09.280252 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:09.283485 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.283829 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.283862 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.284193 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:09.284454 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.284599 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:09.284787 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:13:09.382440 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:13:09.410373 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0116 03:13:09.435625 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:13:09.460028 1011681 provision.go:86] duration metric: configureAuth took 448.744455ms
	I0116 03:13:09.460066 1011681 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:13:09.460309 1011681 config.go:182] Loaded profile config "old-k8s-version-788237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:13:09.460422 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:09.463079 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.463354 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.463396 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.463526 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:09.463784 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.464087 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.464272 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:09.464458 1011681 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:09.464814 1011681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0116 03:13:09.464838 1011681 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:13:09.783889 1011681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:13:09.783923 1011681 machine.go:91] provisioned docker machine in 1.041225615s
	I0116 03:13:09.783938 1011681 start.go:300] post-start starting for "old-k8s-version-788237" (driver="kvm2")
	I0116 03:13:09.783955 1011681 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:13:09.783981 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:09.784410 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:13:09.784452 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:09.787427 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.787841 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.787879 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.788022 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:09.788233 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.788409 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:09.788566 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:13:09.875964 1011681 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:13:09.880665 1011681 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:13:09.880700 1011681 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 03:13:09.880782 1011681 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 03:13:09.880879 1011681 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 03:13:09.881013 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:13:09.890286 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:13:09.913554 1011681 start.go:303] post-start completed in 129.596487ms
	I0116 03:13:09.913586 1011681 fix.go:56] fixHost completed within 21.026657085s
	I0116 03:13:09.913610 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:09.916767 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.917228 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:09.917265 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:09.917551 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:09.917759 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.918017 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:09.918222 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:09.918418 1011681 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:09.918793 1011681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0116 03:13:09.918816 1011681 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:13:10.035012 1011681 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374789.980840898
	
	I0116 03:13:10.035040 1011681 fix.go:206] guest clock: 1705374789.980840898
	I0116 03:13:10.035051 1011681 fix.go:219] Guest: 2024-01-16 03:13:09.980840898 +0000 UTC Remote: 2024-01-16 03:13:09.913590445 +0000 UTC m=+289.770143089 (delta=67.250453ms)
	I0116 03:13:10.035083 1011681 fix.go:190] guest clock delta is within tolerance: 67.250453ms
	I0116 03:13:10.035093 1011681 start.go:83] releasing machines lock for "old-k8s-version-788237", held for 21.148206908s
	I0116 03:13:10.035126 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:10.035410 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetIP
	I0116 03:13:10.038396 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.038745 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:10.038781 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.039048 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:10.039659 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:10.039881 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:13:10.039978 1011681 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:13:10.040024 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:10.040135 1011681 ssh_runner.go:195] Run: cat /version.json
	I0116 03:13:10.040160 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:13:10.043099 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.043326 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.043459 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:10.043482 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.043655 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:10.043756 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:10.043802 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:10.044001 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:13:10.044018 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:10.044241 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:13:10.044249 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:10.044409 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:13:10.044498 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:13:10.044528 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:13:10.131865 1011681 ssh_runner.go:195] Run: systemctl --version
	I0116 03:13:10.160343 1011681 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:13:10.062248 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Start
	I0116 03:13:10.062475 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Ensuring networks are active...
	I0116 03:13:10.063470 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Ensuring network default is active
	I0116 03:13:10.063800 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Ensuring network mk-default-k8s-diff-port-775571 is active
	I0116 03:13:10.064263 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Getting domain xml...
	I0116 03:13:10.065010 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Creating domain...
	I0116 03:13:10.316936 1011681 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:13:10.324330 1011681 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:13:10.324409 1011681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:13:10.343057 1011681 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:13:10.343090 1011681 start.go:475] detecting cgroup driver to use...
	I0116 03:13:10.343184 1011681 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:13:10.359325 1011681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:13:10.377310 1011681 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:13:10.377386 1011681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:13:10.396512 1011681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:13:10.416458 1011681 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:13:10.540518 1011681 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:13:10.671885 1011681 docker.go:233] disabling docker service ...
	I0116 03:13:10.672042 1011681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:13:10.689182 1011681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:13:10.705235 1011681 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:13:10.826545 1011681 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:13:10.941453 1011681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:13:10.954337 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:13:10.974814 1011681 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0116 03:13:10.974894 1011681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:10.984741 1011681 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:13:10.984811 1011681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:10.994451 1011681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:11.004459 1011681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:11.014409 1011681 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:13:11.025057 1011681 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:13:11.033911 1011681 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:13:11.034003 1011681 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:13:11.048044 1011681 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:13:11.056724 1011681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:13:11.180914 1011681 ssh_runner.go:195] Run: sudo systemctl restart crio
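This stretch reconfigures CRI-O for the old-k8s-version profile: point it at the pause image this Kubernetes release expects, force the cgroupfs cgroup manager, make sure bridged pod traffic reaches iptables, enable IPv4 forwarding, and restart the runtime. The same steps by hand, with the drop-in file and values exactly as they appear in the log:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"

    sudo modprobe br_netfilter                       # the sysctl key was missing above, so load the module
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward  # allow routing between pod and host networks

    sudo systemctl daemon-reload
    sudo systemctl restart crio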
	I0116 03:13:11.369876 1011681 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:13:11.369971 1011681 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:13:11.375568 1011681 start.go:543] Will wait 60s for crictl version
	I0116 03:13:11.375638 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:11.379992 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:13:11.422734 1011681 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:13:11.422837 1011681 ssh_runner.go:195] Run: crio --version
	I0116 03:13:11.477909 1011681 ssh_runner.go:195] Run: crio --version
	I0116 03:13:11.536220 1011681 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0116 03:13:08.855145 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:09.355119 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:09.854553 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:09.882463 1011501 api_server.go:72] duration metric: took 2.528495988s to wait for apiserver process to appear ...
	I0116 03:13:09.882491 1011501 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:13:09.882516 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:09.883135 1011501 api_server.go:269] stopped: https://192.168.61.150:8443/healthz: Get "https://192.168.61.150:8443/healthz": dial tcp 192.168.61.150:8443: connect: connection refused
	I0116 03:13:10.382909 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
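Once the static pod manifests are in place, the wait switches from pgrep on the apiserver process to polling its /healthz endpoint until it stops refusing connections. A hypothetical manual probe of the same endpoint (the real check authenticates against the cluster CA; -k here just skips verification for a quick look):

    curl -ks https://192.168.61.150:8443/healthz; echo

Expect "ok" once the apiserver has finished starting, and "connection refused", as in the lines above, until then.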
	I0116 03:13:11.537589 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetIP
	I0116 03:13:11.540815 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:11.541169 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:13:11.541199 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:13:11.541459 1011681 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 03:13:11.546215 1011681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:13:11.562291 1011681 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 03:13:11.562378 1011681 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:13:11.603542 1011681 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 03:13:11.603627 1011681 ssh_runner.go:195] Run: which lz4
	I0116 03:13:11.607873 1011681 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 03:13:11.613536 1011681 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:13:11.613577 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0116 03:13:13.454225 1011681 crio.go:444] Took 1.846391 seconds to copy over tarball
	I0116 03:13:13.454334 1011681 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:13:11.425638 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting to get IP...
	I0116 03:13:11.426748 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.427214 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.427314 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:11.427187 1012757 retry.go:31] will retry after 234.45504ms: waiting for machine to come up
	I0116 03:13:11.663924 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.664619 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.664664 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:11.664556 1012757 retry.go:31] will retry after 318.711044ms: waiting for machine to come up
	I0116 03:13:11.985398 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.985941 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:11.985978 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:11.985917 1012757 retry.go:31] will retry after 463.405848ms: waiting for machine to come up
	I0116 03:13:12.450776 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:12.451335 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:12.451361 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:12.451270 1012757 retry.go:31] will retry after 428.299543ms: waiting for machine to come up
	I0116 03:13:12.881383 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:12.881910 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:12.881946 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:12.881856 1012757 retry.go:31] will retry after 564.023978ms: waiting for machine to come up
	I0116 03:13:13.447917 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:13.448436 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:13.448492 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:13.448405 1012757 retry.go:31] will retry after 694.298162ms: waiting for machine to come up
	I0116 03:13:14.144469 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:14.145037 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:14.145084 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:14.144953 1012757 retry.go:31] will retry after 821.505467ms: waiting for machine to come up
	I0116 03:13:14.967941 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:14.968577 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:14.968611 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:14.968486 1012757 retry.go:31] will retry after 1.079929031s: waiting for machine to come up
	I0116 03:13:14.175997 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:13:14.176046 1011501 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:13:14.176064 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:14.244918 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:14.244979 1011501 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:14.383226 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:14.390006 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:14.390047 1011501 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:14.883209 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:14.889127 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:14.889170 1011501 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:15.382688 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:15.399515 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:15.399554 1011501 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:15.883088 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:13:15.891853 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 200:
	ok
	I0116 03:13:15.905636 1011501 api_server.go:141] control plane version: v1.28.4
	I0116 03:13:15.905683 1011501 api_server.go:131] duration metric: took 6.023183183s to wait for apiserver health ...
	I0116 03:13:15.905697 1011501 cni.go:84] Creating CNI manager for ""
	I0116 03:13:15.905706 1011501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:13:15.907935 1011501 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:13:15.909466 1011501 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:13:15.922375 1011501 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:13:15.952930 1011501 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:13:15.964437 1011501 system_pods.go:59] 8 kube-system pods found
	I0116 03:13:15.964485 1011501 system_pods.go:61] "coredns-5dd5756b68-stqh5" [adbcef96-218b-42ed-9daf-72c274be0690] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:13:15.964494 1011501 system_pods.go:61] "etcd-embed-certs-480663" [6694af11-3b1a-4b84-adcb-3416b87a076f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:13:15.964502 1011501 system_pods.go:61] "kube-apiserver-embed-certs-480663" [3a3d0e5d-35eb-4f2f-9686-b17af19bc777] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:13:15.964508 1011501 system_pods.go:61] "kube-controller-manager-embed-certs-480663" [729b671f-23b9-409b-9f11-d6992f2355c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:13:15.964514 1011501 system_pods.go:61] "kube-proxy-j4786" [aabb98a7-fe55-4105-a5d2-c1e312464107] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:13:15.964520 1011501 system_pods.go:61] "kube-scheduler-embed-certs-480663" [11baa5d7-6e7b-4a25-be6f-e7975b51023d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:13:15.964525 1011501 system_pods.go:61] "metrics-server-57f55c9bc5-7d2fh" [512cf579-f335-4995-8721-74bb84da776e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:13:15.964541 1011501 system_pods.go:61] "storage-provisioner" [da59ff59-869f-48a9-a5c5-c95bb807cbcf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:13:15.964549 1011501 system_pods.go:74] duration metric: took 11.584104ms to wait for pod list to return data ...
	I0116 03:13:15.964560 1011501 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:13:15.971265 1011501 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:13:15.971310 1011501 node_conditions.go:123] node cpu capacity is 2
	I0116 03:13:15.971324 1011501 node_conditions.go:105] duration metric: took 6.758143ms to run NodePressure ...
	I0116 03:13:15.971346 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:16.332558 1011501 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:13:16.343354 1011501 kubeadm.go:787] kubelet initialised
	I0116 03:13:16.343392 1011501 kubeadm.go:788] duration metric: took 10.793951ms waiting for restarted kubelet to initialise ...
	I0116 03:13:16.343403 1011501 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:13:16.370777 1011501 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:16.393556 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.393599 1011501 pod_ready.go:81] duration metric: took 22.772202ms waiting for pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:16.393613 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.393622 1011501 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:16.410313 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "etcd-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.410355 1011501 pod_ready.go:81] duration metric: took 16.72056ms waiting for pod "etcd-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:16.410371 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "etcd-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.410380 1011501 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:16.422777 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.422819 1011501 pod_ready.go:81] duration metric: took 12.426537ms waiting for pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:16.422834 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.422843 1011501 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:16.434722 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.434760 1011501 pod_ready.go:81] duration metric: took 11.904523ms waiting for pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:16.434773 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:16.434783 1011501 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j4786" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:17.092534 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "kube-proxy-j4786" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.092568 1011501 pod_ready.go:81] duration metric: took 657.773691ms waiting for pod "kube-proxy-j4786" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:17.092581 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "kube-proxy-j4786" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.092590 1011501 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:17.158257 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.158294 1011501 pod_ready.go:81] duration metric: took 65.69466ms waiting for pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:17.158308 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.158317 1011501 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:17.872108 1011501 pod_ready.go:97] node "embed-certs-480663" hosting pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.872149 1011501 pod_ready.go:81] duration metric: took 713.820621ms waiting for pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace to be "Ready" ...
	E0116 03:13:17.872162 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-480663" hosting pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:17.872171 1011501 pod_ready.go:38] duration metric: took 1.528756103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:13:17.872202 1011501 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:13:17.890580 1011501 ops.go:34] apiserver oom_adj: -16
	I0116 03:13:17.890613 1011501 kubeadm.go:640] restartCluster took 21.725155834s
	I0116 03:13:17.890626 1011501 kubeadm.go:406] StartCluster complete in 21.784958156s
	I0116 03:13:17.890693 1011501 settings.go:142] acquiring lock: {Name:mk39c69fb5454efd5afab021c02c3bec1f1b4e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:13:17.890792 1011501 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:13:17.893858 1011501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:13:18.133588 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:13:18.133712 1011501 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:13:18.133875 1011501 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-480663"
	I0116 03:13:18.133878 1011501 addons.go:69] Setting metrics-server=true in profile "embed-certs-480663"
	I0116 03:13:18.133911 1011501 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-480663"
	I0116 03:13:18.133906 1011501 addons.go:69] Setting default-storageclass=true in profile "embed-certs-480663"
	I0116 03:13:18.133920 1011501 addons.go:234] Setting addon metrics-server=true in "embed-certs-480663"
	W0116 03:13:18.133924 1011501 addons.go:243] addon storage-provisioner should already be in state true
	W0116 03:13:18.133932 1011501 addons.go:243] addon metrics-server should already be in state true
	I0116 03:13:18.133939 1011501 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-480663"
	I0116 03:13:18.133951 1011501 config.go:182] Loaded profile config "embed-certs-480663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:13:18.133990 1011501 host.go:66] Checking if "embed-certs-480663" exists ...
	I0116 03:13:18.133990 1011501 host.go:66] Checking if "embed-certs-480663" exists ...
	I0116 03:13:18.134422 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.134435 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.134441 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.134458 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.134482 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.134496 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.152772 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0116 03:13:18.153335 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.153822 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36539
	I0116 03:13:18.153952 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.153978 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.153953 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I0116 03:13:18.154272 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.154435 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.154637 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.154836 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.154860 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.154956 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetState
	I0116 03:13:18.155092 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.155118 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.155183 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.155408 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.155884 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.155939 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.155953 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.155985 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.159097 1011501 addons.go:234] Setting addon default-storageclass=true in "embed-certs-480663"
	W0116 03:13:18.159139 1011501 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:13:18.159175 1011501 host.go:66] Checking if "embed-certs-480663" exists ...
	I0116 03:13:18.159631 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.159709 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.176336 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34615
	I0116 03:13:18.177044 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35519
	I0116 03:13:18.177237 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.177646 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.177946 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.177971 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.178455 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.178505 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.178538 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.178951 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.178981 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetState
	I0116 03:13:18.179150 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetState
	I0116 03:13:18.179705 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0116 03:13:18.180094 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.180921 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.180934 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.181286 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.181902 1011501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:18.181925 1011501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:18.182091 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:13:18.182301 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:13:18.302482 1011501 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:18.202219 1011501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40999
	I0116 03:13:18.581432 1011501 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:13:18.581416 1011501 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:13:18.709000 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:13:18.582081 1011501 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:18.709096 1011501 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:13:18.709126 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:13:18.709154 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:13:18.586643 1011501 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-480663" context rescaled to 1 replicas
	I0116 03:13:18.709184 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:13:18.709223 1011501 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:13:18.588936 1011501 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 03:13:18.709955 1011501 main.go:141] libmachine: Using API Version  1
	I0116 03:13:18.713092 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.713501 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.713740 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:13:18.714270 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:13:18.722911 1011501 out.go:177] * Verifying Kubernetes components...
	I0116 03:13:18.722952 1011501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:18.723026 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:13:18.723078 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:13:18.724877 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.723318 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:13:18.724891 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.723318 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:13:18.724748 1011501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:13:18.725164 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:13:18.725165 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:13:18.725281 1011501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:18.725333 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:13:18.725384 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:13:18.725507 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetState
	I0116 03:13:18.727468 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .DriverName
	I0116 03:13:18.727734 1011501 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:13:18.727754 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:13:18.727774 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHHostname
	I0116 03:13:18.730959 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.731419 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:0e:bd", ip: ""} in network mk-embed-certs-480663: {Iface:virbr1 ExpiryTime:2024-01-16 04:04:18 +0000 UTC Type:0 Mac:52:54:00:1c:0e:bd Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:embed-certs-480663 Clientid:01:52:54:00:1c:0e:bd}
	I0116 03:13:18.731488 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | domain embed-certs-480663 has defined IP address 192.168.61.150 and MAC address 52:54:00:1c:0e:bd in network mk-embed-certs-480663
	I0116 03:13:18.731819 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHPort
	I0116 03:13:18.732013 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHKeyPath
	I0116 03:13:18.732162 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .GetSSHUsername
	I0116 03:13:18.732328 1011501 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/embed-certs-480663/id_rsa Username:docker}
	I0116 03:13:18.750255 1011501 node_ready.go:35] waiting up to 6m0s for node "embed-certs-480663" to be "Ready" ...
	I0116 03:13:16.997115 1011681 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.542741465s)
	I0116 03:13:16.997156 1011681 crio.go:451] Took 3.542892 seconds to extract the tarball
	I0116 03:13:16.997169 1011681 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:13:17.046929 1011681 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:13:17.098255 1011681 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 03:13:17.098280 1011681 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:13:17.098386 1011681 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:17.098392 1011681 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:13:17.098461 1011681 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:13:17.098503 1011681 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:13:17.098391 1011681 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:13:17.098621 1011681 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0116 03:13:17.098462 1011681 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0116 03:13:17.098390 1011681 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:13:17.100000 1011681 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:13:17.100009 1011681 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0116 03:13:17.100019 1011681 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:13:17.100039 1011681 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:13:17.100005 1011681 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:17.100438 1011681 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0116 03:13:17.100461 1011681 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:13:17.100666 1011681 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:13:17.256272 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0116 03:13:17.256286 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0116 03:13:17.258442 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0116 03:13:17.259457 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:13:17.264044 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:13:17.267216 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:13:17.274663 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:13:17.423339 1011681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:17.423697 1011681 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0116 03:13:17.423773 1011681 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0116 03:13:17.423813 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.460324 1011681 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0116 03:13:17.460382 1011681 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0116 03:13:17.460441 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.483883 1011681 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0116 03:13:17.483936 1011681 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:13:17.483999 1011681 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0116 03:13:17.484066 1011681 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0116 03:13:17.484087 1011681 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:13:17.484104 1011681 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:13:17.484135 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.484007 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.484144 1011681 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0116 03:13:17.484142 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.484166 1011681 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:13:17.484211 1011681 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0116 03:13:17.484237 1011681 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:13:17.484284 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.484243 1011681 ssh_runner.go:195] Run: which crictl
	I0116 03:13:17.613454 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0116 03:13:17.613555 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0116 03:13:17.613587 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:13:17.613625 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0116 03:13:17.613651 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:13:17.613689 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:13:17.613759 1011681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:13:17.776287 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0116 03:13:17.787958 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0116 03:13:17.788016 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0116 03:13:17.788096 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0116 03:13:17.791623 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0116 03:13:17.791754 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0116 03:13:17.791815 1011681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0116 03:13:17.791858 1011681 cache_images.go:92] LoadImages completed in 693.564709ms
	W0116 03:13:17.791955 1011681 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	I0116 03:13:17.792040 1011681 ssh_runner.go:195] Run: crio config
	I0116 03:13:17.851037 1011681 cni.go:84] Creating CNI manager for ""
	I0116 03:13:17.851066 1011681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:13:17.851109 1011681 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:13:17.851136 1011681 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.91 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-788237 NodeName:old-k8s-version-788237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 03:13:17.851281 1011681 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-788237"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-788237
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.91:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:13:17.851355 1011681 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-788237 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-788237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:13:17.851419 1011681 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0116 03:13:17.861305 1011681 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:13:17.861416 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:13:17.871242 1011681 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0116 03:13:17.891002 1011681 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:13:17.908934 1011681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
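The kubeadm YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is rendered from the options struct logged at kubeadm.go:176 and then copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal sketch of that rendering step, assuming only a handful of the fields and a hypothetical kubeadmOpts struct (the real generator covers the full option set and emits all four documents):

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeadmOpts is a hypothetical, trimmed-down stand-in for the options
    // struct shown in the log line above.
    type kubeadmOpts struct {
    	ClusterName       string
    	PodSubnet         string
    	ServiceCIDR       string
    	KubernetesVersion string
    }

    const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    clusterName: {{.ClusterName}}
    controlPlaneEndpoint: control-plane.minikube.internal:8443
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      dnsDomain: cluster.local
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(clusterConfig))
    	// Render only the ClusterConfiguration fragment to stdout; the real code
    	// writes the full multi-document YAML to /var/tmp/minikube/kubeadm.yaml.new over SSH.
    	if err := t.Execute(os.Stdout, kubeadmOpts{
    		ClusterName:       "old-k8s-version-788237",
    		PodSubnet:         "10.244.0.0/16",
    		ServiceCIDR:       "10.96.0.0/12",
    		KubernetesVersion: "v1.16.0",
    	}); err != nil {
    		panic(err)
    	}
    }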
	I0116 03:13:17.928274 1011681 ssh_runner.go:195] Run: grep 192.168.39.91	control-plane.minikube.internal$ /etc/hosts
	I0116 03:13:17.932258 1011681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:13:17.947070 1011681 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237 for IP: 192.168.39.91
	I0116 03:13:17.947119 1011681 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:13:17.947316 1011681 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 03:13:17.947374 1011681 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 03:13:17.947476 1011681 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/client.key
	I0116 03:13:18.133447 1011681 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/apiserver.key.d2754551
	I0116 03:13:18.133566 1011681 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/proxy-client.key
	I0116 03:13:18.133765 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 03:13:18.133860 1011681 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 03:13:18.133884 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 03:13:18.133951 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 03:13:18.133988 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:13:18.134018 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 03:13:18.134075 1011681 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:13:18.135047 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:13:18.169653 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:13:18.203412 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:13:18.232247 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 03:13:18.264379 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:13:18.293926 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:13:18.320373 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:13:18.345098 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:13:18.375186 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 03:13:18.400408 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 03:13:18.426138 1011681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:13:18.451943 1011681 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:13:18.470682 1011681 ssh_runner.go:195] Run: openssl version
	I0116 03:13:18.477291 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 03:13:18.487687 1011681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 03:13:18.492346 1011681 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 03:13:18.492438 1011681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 03:13:18.498376 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 03:13:18.509157 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 03:13:18.520433 1011681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 03:13:18.525633 1011681 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 03:13:18.525708 1011681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 03:13:18.531567 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:13:18.542827 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:13:18.553440 1011681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:18.558572 1011681 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:18.558647 1011681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:18.564459 1011681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
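The ln -fs targets above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names; that naming is what lets the CA lookup in /etc/ssl/certs find the certificate by hash. A minimal sketch of the hash-and-symlink step, assuming the openssl binary is on PATH and the commands run directly on the node rather than through ssh_runner:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	src := "/usr/share/ca-certificates/minikubeCA.pem"
    	// `openssl x509 -hash -noout` prints the subject-name hash OpenSSL uses
    	// when searching a hashed CA directory such as /etc/ssl/certs.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", src).Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // mimic `ln -fs`: replace an existing link if present
    	if err := os.Symlink(src, link); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("linked", link, "->", src)
    }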
	I0116 03:13:18.575413 1011681 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:13:18.580317 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:13:18.589623 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:13:18.598327 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:13:18.604540 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:13:18.610538 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:13:18.616482 1011681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
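Each `openssl x509 ... -checkend 86400` call above asks whether the certificate remains valid for at least the next 24 hours; a failure here is what triggers regeneration before the cluster restart. A minimal equivalent check in Go, assuming a PEM-encoded certificate on disk:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM data found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Same question `-checkend 86400` asks: is the cert still valid 24h from now?
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h; regeneration needed")
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid for at least another 24h")
    }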
	I0116 03:13:18.622438 1011681 kubeadm.go:404] StartCluster: {Name:old-k8s-version-788237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-788237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:13:18.622565 1011681 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:13:18.622638 1011681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:13:18.662697 1011681 cri.go:89] found id: ""
	I0116 03:13:18.662794 1011681 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:13:18.673299 1011681 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:13:18.673328 1011681 kubeadm.go:636] restartCluster start
	I0116 03:13:18.673404 1011681 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:13:18.683191 1011681 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:18.684893 1011681 kubeconfig.go:92] found "old-k8s-version-788237" server: "https://192.168.39.91:8443"
	I0116 03:13:18.688339 1011681 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:13:18.699684 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:18.699763 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:18.714966 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:19.200230 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:19.200346 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:19.216711 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:19.699865 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:19.699968 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:19.717864 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:20.200734 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:20.200839 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:16.049953 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:16.050440 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:16.050486 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:16.050405 1012757 retry.go:31] will retry after 1.677720431s: waiting for machine to come up
	I0116 03:13:17.729520 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:17.730062 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:17.730098 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:17.729997 1012757 retry.go:31] will retry after 1.686395601s: waiting for machine to come up
	I0116 03:13:19.419165 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:19.419699 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:19.419741 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:19.419628 1012757 retry.go:31] will retry after 2.679023059s: waiting for machine to come up
	I0116 03:13:18.844795 1011501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:13:18.861175 1011501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:13:18.964890 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:13:18.862657 1011501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:13:19.005912 1011501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:13:19.005941 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:13:19.047693 1011501 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:13:19.047734 1011501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:13:19.101576 1011501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:13:19.940514 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:19.940549 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:19.940914 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:19.940941 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:19.940954 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:19.940965 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:19.941288 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:19.941309 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:19.986987 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:19.987020 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:19.987375 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Closing plugin on server side
	I0116 03:13:19.989349 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:19.989375 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:20.550836 1011501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.449206565s)
	I0116 03:13:20.550903 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:20.550921 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:20.550961 1011501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.585981109s)
	I0116 03:13:20.551004 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:20.551020 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:20.551499 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Closing plugin on server side
	I0116 03:13:20.551509 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:20.551519 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:20.551564 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Closing plugin on server side
	I0116 03:13:20.551565 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:20.551604 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:20.551624 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:20.551610 1011501 main.go:141] libmachine: Making call to close driver server
	I0116 03:13:20.551637 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:20.551654 1011501 main.go:141] libmachine: (embed-certs-480663) Calling .Close
	I0116 03:13:20.551899 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:20.551918 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:20.551975 1011501 main.go:141] libmachine: (embed-certs-480663) DBG | Closing plugin on server side
	I0116 03:13:20.552009 1011501 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:13:20.552027 1011501 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:13:20.552050 1011501 addons.go:470] Verifying addon metrics-server=true in "embed-certs-480663"
	I0116 03:13:20.555953 1011501 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0116 03:13:20.557383 1011501 addons.go:505] enable addons completed in 2.42368035s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0116 03:13:20.756003 1011501 node_ready.go:58] node "embed-certs-480663" has status "Ready":"False"
	I0116 03:13:23.254943 1011501 node_ready.go:58] node "embed-certs-480663" has status "Ready":"False"
	W0116 03:13:20.218633 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:20.700343 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:20.700461 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:20.713613 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:21.200115 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:21.200232 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:21.214341 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:21.700520 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:21.700644 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:21.717190 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:22.200709 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:22.200870 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:22.217321 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:22.699859 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:22.699972 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:22.717201 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:23.200594 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:23.200713 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:23.217126 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:23.700769 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:23.700891 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:23.715639 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:24.200713 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:24.200800 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:24.216368 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:24.699816 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:24.699958 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:24.717041 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:25.200575 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:25.200673 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:22.100823 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:22.101280 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:22.101336 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:22.101245 1012757 retry.go:31] will retry after 3.352897115s: waiting for machine to come up
	I0116 03:13:25.456363 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:25.456824 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | unable to find current IP address of domain default-k8s-diff-port-775571 in network mk-default-k8s-diff-port-775571
	I0116 03:13:25.456908 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | I0116 03:13:25.456819 1012757 retry.go:31] will retry after 4.541436356s: waiting for machine to come up
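The retry.go:31 lines above show the KVM driver waiting for the default-k8s-diff-port-775571 VM to obtain a DHCP lease, sleeping for a growing interval between attempts. A minimal sketch of that wait pattern, assuming a capped exponential backoff with jitter (the real retry helper differs in detail):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil keeps calling fn with a capped, jittered backoff until it
    // succeeds or the overall timeout elapses.
    func retryUntil(timeout time.Duration, fn func() error) error {
    	deadline := time.Now().Add(timeout)
    	delay := 500 * time.Millisecond
    	for {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out: %w", err)
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
    		fmt.Printf("will retry after %s: %v\n", sleep, err)
    		time.Sleep(sleep)
    		if delay < 8*time.Second {
    			delay *= 2
    		}
    	}
    }

    func main() {
    	attempts := 0
    	_ = retryUntil(30*time.Second, func() error {
    		attempts++
    		if attempts < 4 {
    			return errors.New("waiting for machine to come up")
    		}
    		return nil
    	})
    	fmt.Println("machine is up after", attempts, "attempts")
    }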
	I0116 03:13:24.754870 1011501 node_ready.go:49] node "embed-certs-480663" has status "Ready":"True"
	I0116 03:13:24.754900 1011501 node_ready.go:38] duration metric: took 6.00460635s waiting for node "embed-certs-480663" to be "Ready" ...
	I0116 03:13:24.754913 1011501 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:13:24.761593 1011501 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:24.769366 1011501 pod_ready.go:92] pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:24.769394 1011501 pod_ready.go:81] duration metric: took 7.773298ms waiting for pod "coredns-5dd5756b68-stqh5" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:24.769407 1011501 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.782066 1011501 pod_ready.go:92] pod "etcd-embed-certs-480663" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:26.782105 1011501 pod_ready.go:81] duration metric: took 2.012689692s waiting for pod "etcd-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.782119 1011501 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.792641 1011501 pod_ready.go:92] pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:26.792674 1011501 pod_ready.go:81] duration metric: took 10.545313ms waiting for pod "kube-apiserver-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.792690 1011501 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.799734 1011501 pod_ready.go:92] pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:26.799756 1011501 pod_ready.go:81] duration metric: took 7.056918ms waiting for pod "kube-controller-manager-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.799765 1011501 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j4786" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.804888 1011501 pod_ready.go:92] pod "kube-proxy-j4786" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:26.804924 1011501 pod_ready.go:81] duration metric: took 5.151602ms waiting for pod "kube-proxy-j4786" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:26.804937 1011501 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:27.954848 1011501 pod_ready.go:92] pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace has status "Ready":"True"
	I0116 03:13:27.954889 1011501 pod_ready.go:81] duration metric: took 1.149940262s waiting for pod "kube-scheduler-embed-certs-480663" in "kube-system" namespace to be "Ready" ...
	I0116 03:13:27.954904 1011501 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace to be "Ready" ...
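pod_ready.go polls each system-critical pod until its Ready condition reports True, or the 6m0s budget per pod is exhausted. A minimal sketch of the same check, assuming kubectl is available instead of the client-go calls the test helper actually uses; the namespace and pod name are taken from the log above:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // podReady shells out to kubectl and reports whether the pod's Ready
    // condition is True.
    func podReady(ns, name string) bool {
    	out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
    	deadline := time.Now().Add(6 * time.Minute)
    	for !podReady("kube-system", "etcd-embed-certs-480663") {
    		if time.Now().After(deadline) {
    			fmt.Println("timed out waiting for pod to be Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("pod is Ready")
    }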
	W0116 03:13:25.214882 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:25.700375 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:25.700473 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:25.713971 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:26.200077 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:26.200184 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:26.212440 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:26.699761 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:26.699855 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:26.713769 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:27.200383 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:27.200476 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:27.212354 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:27.699854 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:27.699946 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:27.712542 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:28.200037 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:28.200144 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:28.212556 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:28.700313 1011681 api_server.go:166] Checking apiserver status ...
	I0116 03:13:28.700415 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:28.712681 1011681 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:28.712718 1011681 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
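The repeated "Checking apiserver status" entries above run pgrep over SSH roughly every 500ms until a kube-apiserver process appears or the context deadline expires; when it expires, restartCluster falls back to a full reconfigure as logged here. A minimal local sketch of that probe, assuming pgrep and sudo are available on the host:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // apiserverPID mirrors the probe in the log: a non-zero pgrep exit means
    // no matching kube-apiserver process exists yet.
    func apiserverPID() (string, error) {
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		return "", fmt.Errorf("unable to get apiserver pid: %w", err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	deadline := time.Now().Add(10 * time.Second)
    	for {
    		if pid, err := apiserverPID(); err == nil {
    			fmt.Println("kube-apiserver running as pid", pid)
    			return
    		}
    		if time.Now().After(deadline) {
    			fmt.Println("apiserver never came up; falling back to cluster reconfigure")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }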
	I0116 03:13:28.712759 1011681 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:13:28.712773 1011681 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:13:28.712840 1011681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:13:28.764021 1011681 cri.go:89] found id: ""
	I0116 03:13:28.764122 1011681 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:13:28.780410 1011681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:13:28.790517 1011681 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:13:28.790617 1011681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:28.800491 1011681 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:28.800544 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:28.935606 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:29.805004 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:30.030241 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:30.123106 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:30.003874 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.004370 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Found IP for machine: 192.168.72.158
	I0116 03:13:30.004394 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Reserving static IP address...
	I0116 03:13:30.004424 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has current primary IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.004824 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-775571", mac: "52:54:00:4b:bc:45", ip: "192.168.72.158"} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.004853 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | skip adding static IP to network mk-default-k8s-diff-port-775571 - found existing host DHCP lease matching {name: "default-k8s-diff-port-775571", mac: "52:54:00:4b:bc:45", ip: "192.168.72.158"}
	I0116 03:13:30.004868 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Reserved static IP address: 192.168.72.158
	I0116 03:13:30.004888 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Waiting for SSH to be available...
	I0116 03:13:30.004901 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Getting to WaitForSSH function...
	I0116 03:13:30.007176 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.007549 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.007592 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.007722 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Using SSH client type: external
	I0116 03:13:30.007752 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa (-rw-------)
	I0116 03:13:30.007791 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:13:30.007807 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | About to run SSH command:
	I0116 03:13:30.007822 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | exit 0
	I0116 03:13:30.105862 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | SSH cmd err, output: <nil>: 
	I0116 03:13:30.106241 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetConfigRaw
	I0116 03:13:30.107063 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetIP
	I0116 03:13:30.110265 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.110754 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.110788 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.111070 1011955 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/config.json ...
	I0116 03:13:30.111270 1011955 machine.go:88] provisioning docker machine ...
	I0116 03:13:30.111289 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:30.111511 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetMachineName
	I0116 03:13:30.111751 1011955 buildroot.go:166] provisioning hostname "default-k8s-diff-port-775571"
	I0116 03:13:30.111781 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetMachineName
	I0116 03:13:30.111987 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:30.114629 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.115002 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.115032 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.115205 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:30.115375 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.115551 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.115706 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:30.115886 1011955 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:30.116340 1011955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0116 03:13:30.116363 1011955 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-775571 && echo "default-k8s-diff-port-775571" | sudo tee /etc/hostname
	I0116 03:13:30.260423 1011955 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-775571
	
	I0116 03:13:30.260451 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:30.263641 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.264075 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.264117 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.264539 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:30.264776 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.264987 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.265162 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:30.265379 1011955 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:30.265894 1011955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0116 03:13:30.265929 1011955 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-775571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-775571/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-775571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:13:30.404028 1011955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:13:30.404070 1011955 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 03:13:30.404131 1011955 buildroot.go:174] setting up certificates
	I0116 03:13:30.404147 1011955 provision.go:83] configureAuth start
	I0116 03:13:30.404167 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetMachineName
	I0116 03:13:30.404539 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetIP
	I0116 03:13:30.407588 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.408002 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.408036 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.408229 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:30.410911 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.411309 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.411362 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.411463 1011955 provision.go:138] copyHostCerts
	I0116 03:13:30.411550 1011955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 03:13:30.411564 1011955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 03:13:30.411637 1011955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 03:13:30.411760 1011955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 03:13:30.411768 1011955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 03:13:30.411800 1011955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 03:13:30.411878 1011955 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 03:13:30.411891 1011955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 03:13:30.411920 1011955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 03:13:30.411983 1011955 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-775571 san=[192.168.72.158 192.168.72.158 localhost 127.0.0.1 minikube default-k8s-diff-port-775571]
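provision.go:112 issues a server certificate whose SAN list covers the machine IP, localhost and the hostname, so the server.pem copied to /etc/docker a few lines below can be verified by either name or address. A minimal sketch of issuing such a certificate, assuming a self-signed cert for brevity (the real provisioner signs with the shared ca.pem/ca-key.pem):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-775571"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SAN list mirrors the one logged above.
    		DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-775571"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.72.158"), net.ParseIP("127.0.0.1")},
    	}
    	// Self-signed for brevity; the real provisioner signs with the CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }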
	I0116 03:13:30.478444 1011955 provision.go:172] copyRemoteCerts
	I0116 03:13:30.478520 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:13:30.478551 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:30.481824 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.482200 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.482239 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.482469 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:30.482663 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.482870 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:30.483070 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:13:31.280327 1011460 start.go:369] acquired machines lock for "no-preload-934668" in 56.48409901s
	I0116 03:13:31.280456 1011460 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:13:31.280473 1011460 fix.go:54] fixHost starting: 
	I0116 03:13:31.280948 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:13:31.280986 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:13:31.302076 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39289
	I0116 03:13:31.302631 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:13:31.303270 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:13:31.303299 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:13:31.303700 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:13:31.304127 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:31.304681 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetState
	I0116 03:13:31.307845 1011460 fix.go:102] recreateIfNeeded on no-preload-934668: state=Stopped err=<nil>
	I0116 03:13:31.307882 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	W0116 03:13:31.308092 1011460 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:13:31.310208 1011460 out.go:177] * Restarting existing kvm2 VM for "no-preload-934668" ...
	I0116 03:13:31.311591 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Start
	I0116 03:13:31.311829 1011460 main.go:141] libmachine: (no-preload-934668) Ensuring networks are active...
	I0116 03:13:31.312840 1011460 main.go:141] libmachine: (no-preload-934668) Ensuring network default is active
	I0116 03:13:31.313302 1011460 main.go:141] libmachine: (no-preload-934668) Ensuring network mk-no-preload-934668 is active
	I0116 03:13:31.313756 1011460 main.go:141] libmachine: (no-preload-934668) Getting domain xml...
	I0116 03:13:31.314627 1011460 main.go:141] libmachine: (no-preload-934668) Creating domain...
	I0116 03:13:30.580435 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:13:30.604188 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0116 03:13:30.627877 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:13:30.651737 1011955 provision.go:86] duration metric: configureAuth took 247.572907ms
	I0116 03:13:30.651768 1011955 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:13:30.651949 1011955 config.go:182] Loaded profile config "default-k8s-diff-port-775571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:13:30.652040 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:30.654855 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.655180 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:30.655224 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:30.655395 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:30.655676 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.655874 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:30.656047 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:30.656231 1011955 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:30.656542 1011955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0116 03:13:30.656562 1011955 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:13:30.996593 1011955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:13:30.996632 1011955 machine.go:91] provisioned docker machine in 885.348285ms
	I0116 03:13:30.996650 1011955 start.go:300] post-start starting for "default-k8s-diff-port-775571" (driver="kvm2")
	I0116 03:13:30.996669 1011955 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:13:30.996697 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:30.997187 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:13:30.997222 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:31.000071 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.000460 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:31.000498 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.000666 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:31.000867 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:31.001030 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:31.001215 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:13:31.102897 1011955 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:13:31.107910 1011955 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:13:31.107939 1011955 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 03:13:31.108003 1011955 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 03:13:31.108076 1011955 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 03:13:31.108165 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:13:31.118591 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:13:31.144536 1011955 start.go:303] post-start completed in 147.864906ms
	I0116 03:13:31.144581 1011955 fix.go:56] fixHost completed within 21.109302207s
	I0116 03:13:31.144609 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:31.147887 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.148261 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:31.148300 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.148487 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:31.148765 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:31.148980 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:31.149195 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:31.149426 1011955 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:31.149818 1011955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0116 03:13:31.149838 1011955 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:13:31.280175 1011955 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374811.251760286
	
	I0116 03:13:31.280203 1011955 fix.go:206] guest clock: 1705374811.251760286
	I0116 03:13:31.280210 1011955 fix.go:219] Guest: 2024-01-16 03:13:31.251760286 +0000 UTC Remote: 2024-01-16 03:13:31.144586974 +0000 UTC m=+275.673207404 (delta=107.173312ms)
	I0116 03:13:31.280231 1011955 fix.go:190] guest clock delta is within tolerance: 107.173312ms
	I0116 03:13:31.280242 1011955 start.go:83] releasing machines lock for "default-k8s-diff-port-775571", held for 21.244993059s
	I0116 03:13:31.280274 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:31.280606 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetIP
	I0116 03:13:31.284082 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.284580 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:31.284627 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.284960 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:31.285552 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:31.285784 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:13:31.285894 1011955 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:13:31.285954 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:31.286062 1011955 ssh_runner.go:195] Run: cat /version.json
	I0116 03:13:31.286081 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:13:31.289112 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.289486 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.289541 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:31.289565 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.289700 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:31.289942 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:31.289959 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:31.289969 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:31.290169 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:31.290251 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:13:31.290334 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:13:31.290487 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:13:31.290643 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:13:31.290787 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:13:31.412666 1011955 ssh_runner.go:195] Run: systemctl --version
	I0116 03:13:31.420934 1011955 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:13:31.571465 1011955 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:13:31.580180 1011955 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:13:31.580312 1011955 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:13:31.601148 1011955 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:13:31.601187 1011955 start.go:475] detecting cgroup driver to use...
	I0116 03:13:31.601274 1011955 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:13:31.622197 1011955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:13:31.637047 1011955 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:13:31.637146 1011955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:13:31.655781 1011955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:13:31.678925 1011955 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:13:31.827298 1011955 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:13:31.973784 1011955 docker.go:233] disabling docker service ...
	I0116 03:13:31.973890 1011955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:13:32.003399 1011955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:13:32.022537 1011955 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:13:32.201640 1011955 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:13:32.336251 1011955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:13:32.352402 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:13:32.376724 1011955 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:13:32.376796 1011955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:32.387636 1011955 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:13:32.387721 1011955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:32.399288 1011955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:32.411777 1011955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:32.425137 1011955 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:13:32.438308 1011955 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:13:32.451165 1011955 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:13:32.451246 1011955 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:13:32.467922 1011955 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:13:32.479144 1011955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:13:32.651975 1011955 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:13:32.857869 1011955 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:13:32.857953 1011955 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:13:32.863869 1011955 start.go:543] Will wait 60s for crictl version
	I0116 03:13:32.863957 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:13:32.868179 1011955 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:13:32.917020 1011955 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:13:32.917111 1011955 ssh_runner.go:195] Run: crio --version
	I0116 03:13:32.970563 1011955 ssh_runner.go:195] Run: crio --version
	I0116 03:13:33.027800 1011955 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 03:13:29.966940 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:32.466746 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:30.212501 1011681 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:13:30.212577 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:30.712756 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:31.212694 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:31.713596 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:32.212767 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:32.258055 1011681 api_server.go:72] duration metric: took 2.045552104s to wait for apiserver process to appear ...
	I0116 03:13:32.258091 1011681 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:13:32.258118 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:32.258807 1011681 api_server.go:269] stopped: https://192.168.39.91:8443/healthz: Get "https://192.168.39.91:8443/healthz": dial tcp 192.168.39.91:8443: connect: connection refused
	I0116 03:13:32.758305 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:33.029157 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetIP
	I0116 03:13:33.032430 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:33.032824 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:13:33.032860 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:13:33.033077 1011955 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0116 03:13:33.037500 1011955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:13:33.050478 1011955 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:13:33.050573 1011955 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:13:33.096041 1011955 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 03:13:33.096133 1011955 ssh_runner.go:195] Run: which lz4
	I0116 03:13:33.100546 1011955 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:13:33.105198 1011955 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:13:33.105234 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 03:13:35.104728 1011955 crio.go:444] Took 2.004229 seconds to copy over tarball
	I0116 03:13:35.104817 1011955 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:13:32.655911 1011460 main.go:141] libmachine: (no-preload-934668) Waiting to get IP...
	I0116 03:13:32.657029 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:32.657609 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:32.657728 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:32.657598 1012976 retry.go:31] will retry after 271.069608ms: waiting for machine to come up
	I0116 03:13:32.930214 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:32.930725 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:32.930856 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:32.930775 1012976 retry.go:31] will retry after 377.793601ms: waiting for machine to come up
	I0116 03:13:33.310351 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:33.310835 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:33.310897 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:33.310781 1012976 retry.go:31] will retry after 416.26092ms: waiting for machine to come up
	I0116 03:13:33.728484 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:33.729148 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:33.729189 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:33.729011 1012976 retry.go:31] will retry after 608.181162ms: waiting for machine to come up
	I0116 03:13:34.339151 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:34.339614 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:34.339642 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:34.339539 1012976 retry.go:31] will retry after 750.260968ms: waiting for machine to come up
	I0116 03:13:35.090870 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:35.091333 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:35.091362 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:35.091285 1012976 retry.go:31] will retry after 700.212947ms: waiting for machine to come up
	I0116 03:13:35.793243 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:35.793740 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:35.793774 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:35.793633 1012976 retry.go:31] will retry after 743.854004ms: waiting for machine to come up
	I0116 03:13:36.539322 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:36.539985 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:36.540018 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:36.539939 1012976 retry.go:31] will retry after 1.305141922s: waiting for machine to come up
	I0116 03:13:34.974062 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:37.464767 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:37.759482 1011681 api_server.go:269] stopped: https://192.168.39.91:8443/healthz: Get "https://192.168.39.91:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0116 03:13:37.759559 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:39.188258 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:13:39.188300 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:13:39.188322 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:39.222005 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:13:39.222064 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:13:39.259251 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:39.360385 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:13:39.360456 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:13:39.759006 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:38.432521 1011955 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.327659635s)
	I0116 03:13:38.432570 1011955 crio.go:451] Took 3.327807 seconds to extract the tarball
	I0116 03:13:38.432585 1011955 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:13:38.477872 1011955 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:13:38.535414 1011955 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:13:38.535442 1011955 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:13:38.535510 1011955 ssh_runner.go:195] Run: crio config
	I0116 03:13:38.604605 1011955 cni.go:84] Creating CNI manager for ""
	I0116 03:13:38.604636 1011955 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:13:38.604663 1011955 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:13:38.604690 1011955 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.158 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-775571 NodeName:default-k8s-diff-port-775571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:13:38.604871 1011955 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.158
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-775571"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:13:38.604946 1011955 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-775571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-775571 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0116 03:13:38.605006 1011955 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:13:38.619020 1011955 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:13:38.619106 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:13:38.633715 1011955 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0116 03:13:38.651239 1011955 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:13:38.670877 1011955 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0116 03:13:38.689268 1011955 ssh_runner.go:195] Run: grep 192.168.72.158	control-plane.minikube.internal$ /etc/hosts
	I0116 03:13:38.694783 1011955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:13:38.709936 1011955 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571 for IP: 192.168.72.158
	I0116 03:13:38.709984 1011955 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:13:38.710196 1011955 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 03:13:38.710269 1011955 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 03:13:38.710379 1011955 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/client.key
	I0116 03:13:38.710471 1011955 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/apiserver.key.6c936bf0
	I0116 03:13:38.710533 1011955 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/proxy-client.key
	I0116 03:13:38.710677 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 03:13:38.710717 1011955 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 03:13:38.710734 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 03:13:38.710771 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 03:13:38.710810 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:13:38.710849 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 03:13:38.710911 1011955 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:13:38.711657 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:13:38.742564 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 03:13:38.770741 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:13:38.795401 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 03:13:38.819574 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:13:38.847962 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:13:38.872537 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:13:38.898930 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:13:38.924558 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:13:38.950417 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 03:13:38.976115 1011955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 03:13:39.008493 1011955 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:13:39.028392 1011955 ssh_runner.go:195] Run: openssl version
	I0116 03:13:39.034429 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 03:13:39.046541 1011955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 03:13:39.051560 1011955 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 03:13:39.051656 1011955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 03:13:39.058169 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 03:13:39.072168 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 03:13:39.086485 1011955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 03:13:39.091108 1011955 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 03:13:39.091162 1011955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 03:13:39.098393 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:13:39.109323 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:13:39.121606 1011955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:39.127187 1011955 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:39.127263 1011955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:13:39.134830 1011955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:13:39.149731 1011955 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:13:39.156181 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:13:39.164095 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:13:39.172662 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:13:39.180598 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:13:39.188640 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:13:39.197249 1011955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 03:13:39.206289 1011955 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-775571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-775571 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:13:39.206442 1011955 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:13:39.206509 1011955 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:13:39.259399 1011955 cri.go:89] found id: ""
	I0116 03:13:39.259481 1011955 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:13:39.273356 1011955 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:13:39.273385 1011955 kubeadm.go:636] restartCluster start
	I0116 03:13:39.273474 1011955 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:13:39.287459 1011955 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:39.288748 1011955 kubeconfig.go:92] found "default-k8s-diff-port-775571" server: "https://192.168.72.158:8444"
	I0116 03:13:39.291777 1011955 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:13:39.304936 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:39.305013 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:39.321035 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:39.805691 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:39.805843 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:39.821119 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:40.305352 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:40.305464 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:40.320908 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:40.205526 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:13:40.417347 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:13:40.417381 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:40.626819 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:13:40.626875 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:13:40.759016 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:40.769794 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:13:40.769867 1011681 api_server.go:103] status: https://192.168.39.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:13:41.258280 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:13:41.268104 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 200:
	ok
	I0116 03:13:41.276527 1011681 api_server.go:141] control plane version: v1.16.0
	I0116 03:13:41.276576 1011681 api_server.go:131] duration metric: took 9.018477008s to wait for apiserver health ...
	I0116 03:13:41.276587 1011681 cni.go:84] Creating CNI manager for ""
	I0116 03:13:41.276593 1011681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:13:41.278640 1011681 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:13:37.847223 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:37.847666 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:37.847702 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:37.847614 1012976 retry.go:31] will retry after 1.639650566s: waiting for machine to come up
	I0116 03:13:39.488850 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:39.489197 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:39.489230 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:39.489145 1012976 retry.go:31] will retry after 2.106627157s: waiting for machine to come up
	I0116 03:13:41.598019 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:41.598601 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:41.598635 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:41.598540 1012976 retry.go:31] will retry after 2.493521899s: waiting for machine to come up
	I0116 03:13:39.963772 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:41.965748 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:41.280699 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:13:41.300296 1011681 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:13:41.341944 1011681 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:13:41.361578 1011681 system_pods.go:59] 7 kube-system pods found
	I0116 03:13:41.361618 1011681 system_pods.go:61] "coredns-5644d7b6d9-5j7ps" [d1ccd80c-b19b-49ae-bc1c-deee7f0db229] Running
	I0116 03:13:41.361627 1011681 system_pods.go:61] "etcd-old-k8s-version-788237" [4a34c524-dce0-4c01-a1f2-291a59c02044] Running
	I0116 03:13:41.361634 1011681 system_pods.go:61] "kube-apiserver-old-k8s-version-788237" [2b802f72-d63e-423d-ac43-89b836bd4b70] Running
	I0116 03:13:41.361640 1011681 system_pods.go:61] "kube-controller-manager-old-k8s-version-788237" [a41d42f1-0587-4cb6-965f-fffdb8bcde5d] Running
	I0116 03:13:41.361645 1011681 system_pods.go:61] "kube-proxy-vtxjk" [4993e4ef-5193-4632-a61a-a0b38601239d] Running
	I0116 03:13:41.361651 1011681 system_pods.go:61] "kube-scheduler-old-k8s-version-788237" [712a30dc-0217-47d4-88ba-d63f6f2f6d02] Running
	I0116 03:13:41.361662 1011681 system_pods.go:61] "storage-provisioner" [2e43ef59-3c6b-4c78-81ae-71dbd0eaddfd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:13:41.361680 1011681 system_pods.go:74] duration metric: took 19.701772ms to wait for pod list to return data ...
	I0116 03:13:41.361698 1011681 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:13:41.366876 1011681 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:13:41.366918 1011681 node_conditions.go:123] node cpu capacity is 2
	I0116 03:13:41.366933 1011681 node_conditions.go:105] duration metric: took 5.228319ms to run NodePressure ...
	I0116 03:13:41.366961 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:41.921064 1011681 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:13:41.925272 1011681 retry.go:31] will retry after 140.477343ms: kubelet not initialised
	I0116 03:13:42.072065 1011681 retry.go:31] will retry after 346.605533ms: kubelet not initialised
	I0116 03:13:42.428950 1011681 retry.go:31] will retry after 456.811796ms: kubelet not initialised
	I0116 03:13:42.893528 1011681 retry.go:31] will retry after 821.458486ms: kubelet not initialised
	I0116 03:13:43.721228 1011681 retry.go:31] will retry after 1.260888799s: kubelet not initialised
	I0116 03:13:44.988346 1011681 retry.go:31] will retry after 1.183564266s: kubelet not initialised
	I0116 03:13:40.805756 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:40.805890 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:40.823823 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:41.305065 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:41.305161 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:41.317967 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:41.805703 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:41.805813 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:41.819698 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:42.305067 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:42.305209 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:42.318643 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:42.805284 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:42.805381 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:42.821975 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:43.305106 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:43.305234 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:43.318457 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:43.805741 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:43.805902 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:43.820562 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:44.305077 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:44.305217 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:44.322452 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:44.805978 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:44.806111 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:44.822302 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:45.305330 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:45.305432 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:45.317788 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:44.095061 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:44.095629 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:44.095658 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:44.095576 1012976 retry.go:31] will retry after 3.106364447s: waiting for machine to come up
	I0116 03:13:47.203798 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:47.204278 1011460 main.go:141] libmachine: (no-preload-934668) DBG | unable to find current IP address of domain no-preload-934668 in network mk-no-preload-934668
	I0116 03:13:47.204310 1011460 main.go:141] libmachine: (no-preload-934668) DBG | I0116 03:13:47.204216 1012976 retry.go:31] will retry after 3.186263998s: waiting for machine to come up
	I0116 03:13:44.462154 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:46.467556 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:46.177475 1011681 retry.go:31] will retry after 2.879508446s: kubelet not initialised
	I0116 03:13:49.062319 1011681 retry.go:31] will retry after 3.01676683s: kubelet not initialised
	I0116 03:13:45.805770 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:45.805896 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:45.822222 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:46.305853 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:46.305977 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:46.322927 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:46.805392 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:46.805501 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:46.822012 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:47.305518 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:47.305634 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:47.322371 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:47.805932 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:47.806027 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:47.821119 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:48.305696 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:48.305832 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:48.318366 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:48.805946 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:48.806039 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:48.819066 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:49.305780 1011955 api_server.go:166] Checking apiserver status ...
	I0116 03:13:49.305922 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:13:49.318542 1011955 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:13:49.318576 1011955 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:13:49.318588 1011955 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:13:49.318602 1011955 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:13:49.318663 1011955 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:13:49.361552 1011955 cri.go:89] found id: ""
	I0116 03:13:49.361636 1011955 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:13:49.378478 1011955 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:13:49.389158 1011955 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:13:49.389248 1011955 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:49.398973 1011955 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:13:49.399019 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:49.516974 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:50.394812 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.395295 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has current primary IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.395323 1011460 main.go:141] libmachine: (no-preload-934668) Found IP for machine: 192.168.50.29
	I0116 03:13:50.395338 1011460 main.go:141] libmachine: (no-preload-934668) Reserving static IP address...
	I0116 03:13:50.395804 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "no-preload-934668", mac: "52:54:00:96:89:86", ip: "192.168.50.29"} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.395830 1011460 main.go:141] libmachine: (no-preload-934668) Reserved static IP address: 192.168.50.29
	I0116 03:13:50.395851 1011460 main.go:141] libmachine: (no-preload-934668) DBG | skip adding static IP to network mk-no-preload-934668 - found existing host DHCP lease matching {name: "no-preload-934668", mac: "52:54:00:96:89:86", ip: "192.168.50.29"}
	I0116 03:13:50.395880 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Getting to WaitForSSH function...
	I0116 03:13:50.395898 1011460 main.go:141] libmachine: (no-preload-934668) Waiting for SSH to be available...
	I0116 03:13:50.398256 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.398608 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.398652 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.398838 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Using SSH client type: external
	I0116 03:13:50.398864 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa (-rw-------)
	I0116 03:13:50.398917 1011460 main.go:141] libmachine: (no-preload-934668) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:13:50.398936 1011460 main.go:141] libmachine: (no-preload-934668) DBG | About to run SSH command:
	I0116 03:13:50.398949 1011460 main.go:141] libmachine: (no-preload-934668) DBG | exit 0
	I0116 03:13:50.489493 1011460 main.go:141] libmachine: (no-preload-934668) DBG | SSH cmd err, output: <nil>: 
	I0116 03:13:50.489954 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetConfigRaw
	I0116 03:13:50.490626 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetIP
	I0116 03:13:50.493468 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.493892 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.493943 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.494329 1011460 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/config.json ...
	I0116 03:13:50.494545 1011460 machine.go:88] provisioning docker machine ...
	I0116 03:13:50.494566 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:50.494837 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetMachineName
	I0116 03:13:50.495038 1011460 buildroot.go:166] provisioning hostname "no-preload-934668"
	I0116 03:13:50.495067 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetMachineName
	I0116 03:13:50.495216 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:50.497623 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.498048 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.498068 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.498226 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:50.498413 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:50.498569 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:50.498711 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:50.498887 1011460 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:50.499381 1011460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0116 03:13:50.499400 1011460 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-934668 && echo "no-preload-934668" | sudo tee /etc/hostname
	I0116 03:13:50.632759 1011460 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-934668
	
	I0116 03:13:50.632795 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:50.636057 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.636489 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.636523 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.636684 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:50.636965 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:50.637189 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:50.637383 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:50.637560 1011460 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:50.637994 1011460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0116 03:13:50.638021 1011460 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-934668' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-934668/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-934668' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:13:50.765312 1011460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:13:50.765351 1011460 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 03:13:50.765380 1011460 buildroot.go:174] setting up certificates
	I0116 03:13:50.765395 1011460 provision.go:83] configureAuth start
	I0116 03:13:50.765408 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetMachineName
	I0116 03:13:50.765746 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetIP
	I0116 03:13:50.769190 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.769597 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.769670 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.769902 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:50.772879 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.773334 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.773367 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.773660 1011460 provision.go:138] copyHostCerts
	I0116 03:13:50.773750 1011460 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 03:13:50.773766 1011460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 03:13:50.773868 1011460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 03:13:50.774025 1011460 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 03:13:50.774043 1011460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 03:13:50.774077 1011460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 03:13:50.774174 1011460 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 03:13:50.774187 1011460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 03:13:50.774221 1011460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 03:13:50.774317 1011460 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.no-preload-934668 san=[192.168.50.29 192.168.50.29 localhost 127.0.0.1 minikube no-preload-934668]
	I0116 03:13:50.955273 1011460 provision.go:172] copyRemoteCerts
	I0116 03:13:50.955364 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:13:50.955404 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:50.958601 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.958977 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:50.959013 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:50.959258 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:50.959495 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:50.959704 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:50.959878 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:13:51.047852 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:13:51.079250 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 03:13:51.110170 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:13:51.137342 1011460 provision.go:86] duration metric: configureAuth took 371.929858ms
	I0116 03:13:51.137376 1011460 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:13:51.137602 1011460 config.go:182] Loaded profile config "no-preload-934668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:13:51.137690 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:51.140451 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.140935 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.140963 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.141217 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:51.141435 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.141604 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.141726 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:51.141913 1011460 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:51.142238 1011460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0116 03:13:51.142267 1011460 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:13:51.468734 1011460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:13:51.468771 1011460 machine.go:91] provisioned docker machine in 974.21023ms
	I0116 03:13:51.468786 1011460 start.go:300] post-start starting for "no-preload-934668" (driver="kvm2")
	I0116 03:13:51.468803 1011460 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:13:51.468828 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:51.469200 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:13:51.469228 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:51.472154 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.472614 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.472665 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.472794 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:51.472991 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.473167 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:51.473321 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:13:51.558257 1011460 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:13:51.563146 1011460 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:13:51.563178 1011460 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 03:13:51.563243 1011460 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 03:13:51.563339 1011460 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 03:13:51.563437 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:13:51.574145 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:13:51.603071 1011460 start.go:303] post-start completed in 134.264931ms
	I0116 03:13:51.603104 1011460 fix.go:56] fixHost completed within 20.322632188s
	I0116 03:13:51.603128 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:51.606596 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.607040 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.607094 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.607312 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:51.607554 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.607710 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.607896 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:51.608107 1011460 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:51.608461 1011460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0116 03:13:51.608472 1011460 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:13:51.724098 1011460 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374831.664998093
	
	I0116 03:13:51.724128 1011460 fix.go:206] guest clock: 1705374831.664998093
	I0116 03:13:51.724137 1011460 fix.go:219] Guest: 2024-01-16 03:13:51.664998093 +0000 UTC Remote: 2024-01-16 03:13:51.60310878 +0000 UTC m=+359.363375393 (delta=61.889313ms)
	I0116 03:13:51.724164 1011460 fix.go:190] guest clock delta is within tolerance: 61.889313ms
	I0116 03:13:51.724171 1011460 start.go:83] releasing machines lock for "no-preload-934668", held for 20.443784472s
	I0116 03:13:51.724202 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:51.724534 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetIP
	I0116 03:13:51.727999 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.728527 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.728562 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.728809 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:51.729469 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:51.729704 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:13:51.729819 1011460 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:13:51.729869 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:51.729958 1011460 ssh_runner.go:195] Run: cat /version.json
	I0116 03:13:51.729976 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:13:51.732965 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.733095 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.733424 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.733451 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.733528 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:51.733550 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:51.733591 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:51.733725 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:13:51.733841 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.733972 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:13:51.733998 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:51.734170 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:13:51.734205 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:13:51.734306 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:13:51.819882 1011460 ssh_runner.go:195] Run: systemctl --version
	I0116 03:13:51.848935 1011460 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:13:52.005460 1011460 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:13:52.012691 1011460 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:13:52.012799 1011460 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:13:52.031857 1011460 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:13:52.031884 1011460 start.go:475] detecting cgroup driver to use...
	I0116 03:13:52.031950 1011460 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:13:52.049305 1011460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:13:52.063332 1011460 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:13:52.063407 1011460 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:13:52.080341 1011460 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:13:52.099750 1011460 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:13:52.241916 1011460 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:13:52.374908 1011460 docker.go:233] disabling docker service ...
	I0116 03:13:52.375010 1011460 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:13:52.393531 1011460 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:13:52.410744 1011460 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:13:52.545990 1011460 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:13:52.677872 1011460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:13:52.692652 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:13:52.711774 1011460 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:13:52.711871 1011460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:52.722079 1011460 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:13:52.722179 1011460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:52.732784 1011460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:52.742863 1011460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:13:52.752987 1011460 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:13:52.764401 1011460 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:13:52.773584 1011460 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:13:52.773668 1011460 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:13:52.787400 1011460 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:13:52.798262 1011460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:13:52.928159 1011460 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:13:53.106967 1011460 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:13:53.107069 1011460 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:13:53.112312 1011460 start.go:543] Will wait 60s for crictl version
	I0116 03:13:53.112387 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.116701 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:13:53.166149 1011460 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:13:53.166246 1011460 ssh_runner.go:195] Run: crio --version
	I0116 03:13:53.227306 1011460 ssh_runner.go:195] Run: crio --version
	I0116 03:13:53.289601 1011460 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0116 03:13:48.961681 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:50.969620 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:53.462450 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:52.085958 1011681 retry.go:31] will retry after 4.051731251s: kubelet not initialised
	I0116 03:13:50.527883 1011955 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.010858065s)
	I0116 03:13:50.527951 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:50.734058 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:50.824872 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:13:50.919552 1011955 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:13:50.919679 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:51.420316 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:51.920460 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:52.419846 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:52.920241 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:53.419933 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:53.920527 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:13:53.948958 1011955 api_server.go:72] duration metric: took 3.029405367s to wait for apiserver process to appear ...
	I0116 03:13:53.948990 1011955 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:13:53.949018 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:13:53.291126 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetIP
	I0116 03:13:53.294326 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:53.294780 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:13:53.294833 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:13:53.295093 1011460 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0116 03:13:53.300971 1011460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:13:53.316040 1011460 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 03:13:53.316107 1011460 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:13:53.368111 1011460 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0116 03:13:53.368138 1011460 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:13:53.368196 1011460 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:53.368485 1011460 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:13:53.368569 1011460 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:13:53.368584 1011460 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:13:53.368596 1011460 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:13:53.368607 1011460 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:13:53.368626 1011460 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0116 03:13:53.368669 1011460 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:13:53.370675 1011460 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:13:53.370735 1011460 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:13:53.371123 1011460 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:13:53.371132 1011460 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:13:53.371191 1011460 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0116 03:13:53.371333 1011460 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:13:53.371456 1011460 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:53.371815 1011460 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:13:53.515854 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0116 03:13:53.524922 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:13:53.531697 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:13:53.540206 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0116 03:13:53.543219 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:13:53.546913 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:13:53.580609 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:13:53.610214 1011460 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0116 03:13:53.610281 1011460 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:13:53.610353 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.677663 1011460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:53.687535 1011460 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0116 03:13:53.687595 1011460 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:13:53.687599 1011460 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0116 03:13:53.687638 1011460 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:13:53.687667 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.687717 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.862729 1011460 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0116 03:13:53.862804 1011460 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:13:53.862830 1011460 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0116 03:13:53.862929 1011460 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:13:53.863101 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.863151 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:13:53.862947 1011460 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0116 03:13:53.863216 1011460 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:13:53.863098 1011460 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0116 03:13:53.863245 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.863264 1011460 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:53.862873 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.863311 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:13:53.863060 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0116 03:13:53.863156 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:13:53.928805 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:13:53.968913 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0116 03:13:53.969132 1011460 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:13:53.974631 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0116 03:13:53.974701 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0116 03:13:53.974754 1011460 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:13:53.974928 1011460 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:13:53.974792 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:13:53.974818 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:13:53.974833 1011460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:13:54.018085 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0116 03:13:54.018198 1011460 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:13:54.018288 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0116 03:13:54.018300 1011460 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:13:54.018326 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:13:54.086983 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 03:13:54.087041 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0116 03:13:54.087074 1011460 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0116 03:13:54.087111 1011460 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:13:54.087147 1011460 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:13:54.087148 1011460 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:13:54.087203 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0116 03:13:54.087245 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
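	The lines above are minikube's cached-image load path for the crio runtime: the host has no docker daemon copy of each image (the image.go:177 "daemon lookup" failures), so the guest is asked with podman image inspect whether the image is already present at the expected digest; when it is not, cache_images.go marks it as "needs transfer", the stale tag is removed with crictl rmi, the pre-downloaded tarball under /var/lib/minikube/images is reused if stat shows it already exists (otherwise it would be copied over SSH), and crio.go:257 then loads the tarballs one at a time with podman load -i. A minimal sketch of that decision; runOverSSH, the guest address and the user are illustrative stand-ins, not minikube's actual helpers:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runOverSSH stands in for minikube's ssh_runner: it executes a command
// inside the guest VM and returns the combined output (hypothetical helper).
func runOverSSH(args ...string) (string, error) {
	out, err := exec.Command("ssh", append([]string{"docker@192.168.50.29"}, args...)...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func ensureImage(image, wantID, cachedTar string) error {
	// 1. Does the runtime already hold the image at the expected ID?
	id, _ := runOverSSH("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image)
	if id == wantID {
		return nil // nothing to do
	}
	// 2. Drop any stale tag so the cached copy can be loaded cleanly.
	runOverSSH("sudo", "crictl", "rmi", image)
	// 3. Skip the copy when the tarball already exists on the guest,
	//    then load it into the runtime's store.
	if _, err := runOverSSH("stat", cachedTar); err != nil {
		return fmt.Errorf("tarball %s missing, would be copied from the host cache first", cachedTar)
	}
	if _, err := runOverSSH("sudo", "podman", "load", "-i", cachedTar); err != nil {
		return fmt.Errorf("load %s: %w", image, err)
	}
	return nil
}

func main() {
	_ = ensureImage("registry.k8s.io/etcd:3.5.10-0",
		"a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7",
		"/var/lib/minikube/images/etcd_3.5.10-0")
}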
	I0116 03:13:55.466435 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:57.968591 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:13:57.859025 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:13:57.859081 1011955 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:13:57.859100 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:13:57.949519 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:57.949575 1011955 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:57.949623 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:13:57.965508 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:57.965553 1011955 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:58.449680 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:13:58.456250 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:58.456292 1011955 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:58.950052 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:13:58.962965 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:13:58.963019 1011955 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:13:59.449560 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:13:59.457086 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0116 03:13:59.469254 1011955 api_server.go:141] control plane version: v1.28.4
	I0116 03:13:59.469294 1011955 api_server.go:131] duration metric: took 5.520295477s to wait for apiserver health ...
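	The long stretch above is api_server.go polling the apiserver's /healthz endpoint until it answers 200 "ok": the 403 (anonymous user forbidden) and the 500 responses with failing poststarthooks such as rbac/bootstrap-roles are the normal progression while the restarted control plane finishes initialising. A minimal poller in the same spirit; the endpoint is the one from the log, the timeout is illustrative, and TLS verification is skipped because a self-signed apiserver certificate is being contacted by IP here (a sketch, not minikube's implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok"
			}
			// 403 (anonymous user) and 500 (poststarthooks still failing)
			// are expected while the control plane comes back up.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	_ = waitForHealthz("https://192.168.72.158:8444/healthz", 4*time.Minute)
}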
	I0116 03:13:59.469308 1011955 cni.go:84] Creating CNI manager for ""
	I0116 03:13:59.469316 1011955 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:13:59.471524 1011955 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:13:56.143871 1011681 retry.go:31] will retry after 12.777471538s: kubelet not initialised
	I0116 03:13:59.472896 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:13:59.486944 1011955 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
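	Having recommended the bridge CNI for the kvm2 + crio combination, minikube writes a single conflist into /etc/cni/net.d; the log records only its size (457 bytes). The file below is an illustrative bridge + portmap configuration for the 10.244.0.0/16 pod CIDR, written the way a provisioner might do it, and not the literal bytes minikube generates:

package main

import "os"

// Illustrative bridge CNI config for pod CIDR 10.244.0.0/16; the real
// /etc/cni/net.d/1-k8s.conflist written by minikube may differ in detail.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	_ = os.MkdirAll("/etc/cni/net.d", 0o755)
	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644)
}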
	I0116 03:13:59.511553 1011955 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:13:59.530287 1011955 system_pods.go:59] 8 kube-system pods found
	I0116 03:13:59.530357 1011955 system_pods.go:61] "coredns-5dd5756b68-z7b9d" [735c028e-f6a8-4a96-a615-95befe445a97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:13:59.530374 1011955 system_pods.go:61] "etcd-default-k8s-diff-port-775571" [3e321076-74dd-49a8-b078-4f63505b5783] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:13:59.530391 1011955 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-775571" [07f01ea4-0317-4d3d-a03c-7c1756a5746c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:13:59.530409 1011955 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-775571" [5d4f4ee1-1f7c-4dfc-8c85-daca7a2d9fc0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:13:59.530428 1011955 system_pods.go:61] "kube-proxy-lntj2" [946acb12-217d-42e6-bcfc-37dca684b638] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:13:59.530437 1011955 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-775571" [6b278ad1-d59e-4b81-a4ec-cde1b643bb90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:13:59.530449 1011955 system_pods.go:61] "metrics-server-57f55c9bc5-9bsqm" [ef0830b9-7e34-4aab-a1a6-8f91881b6934] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:13:59.530460 1011955 system_pods.go:61] "storage-provisioner" [8b20335e-7293-48bd-99f6-987cd95a0dc2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:13:59.530474 1011955 system_pods.go:74] duration metric: took 18.829356ms to wait for pod list to return data ...
	I0116 03:13:59.530483 1011955 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:13:59.535596 1011955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:13:59.535637 1011955 node_conditions.go:123] node cpu capacity is 2
	I0116 03:13:59.535651 1011955 node_conditions.go:105] duration metric: took 5.161567ms to run NodePressure ...
	I0116 03:13:59.535675 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:00.026516 1011955 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:14:00.035093 1011955 kubeadm.go:787] kubelet initialised
	I0116 03:14:00.035126 1011955 kubeadm.go:788] duration metric: took 8.522284ms waiting for restarted kubelet to initialise ...
	I0116 03:14:00.035137 1011955 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:14:00.067410 1011955 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-z7b9d" in "kube-system" namespace to be "Ready" ...
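	From here pod_ready.go waits up to 4m0s for each of the system-critical pods listed at pod_ready.go:35 to report the Ready condition; the recurring `has status "Ready":"False"` lines in the rest of this log are that poll looping until the condition flips to True. Checking the same condition with client-go looks roughly like this (the kubeconfig path is a placeholder and the pod name is the one from the log; a sketch, not the test's code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-z7b9d", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // the test polls with a similar backoff
	}
}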
	I0116 03:13:58.094229 1011460 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (4.076000974s)
	I0116 03:13:58.094289 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.075931984s)
	I0116 03:13:58.094310 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0116 03:13:58.094313 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0116 03:13:58.094331 1011460 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (4.007198419s)
	I0116 03:13:58.094353 1011460 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:13:58.094364 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0116 03:13:58.094367 1011460 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (4.007202527s)
	I0116 03:13:58.094384 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0116 03:13:58.094406 1011460 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (4.007194547s)
	I0116 03:13:58.094462 1011460 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0116 03:13:58.094412 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:14:01.772635 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.678136161s)
	I0116 03:14:01.772673 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0116 03:14:01.772705 1011460 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:14:01.772758 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:14:00.463370 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:02.471583 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:02.075650 1011955 pod_ready.go:102] pod "coredns-5dd5756b68-z7b9d" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:04.077051 1011955 pod_ready.go:102] pod "coredns-5dd5756b68-z7b9d" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:04.575569 1011955 pod_ready.go:92] pod "coredns-5dd5756b68-z7b9d" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:04.575601 1011955 pod_ready.go:81] duration metric: took 4.508014187s waiting for pod "coredns-5dd5756b68-z7b9d" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:04.575613 1011955 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:03.238654 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.465862156s)
	I0116 03:14:03.238716 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0116 03:14:03.238745 1011460 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:14:03.238799 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:14:05.517213 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.278362381s)
	I0116 03:14:05.517256 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0116 03:14:05.517290 1011460 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:14:05.517354 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:14:06.265419 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0116 03:14:06.265468 1011460 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:14:06.265522 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:14:04.544905 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:06.964607 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:08.928050 1011681 retry.go:31] will retry after 7.799067246s: kubelet not initialised
	I0116 03:14:06.583214 1011955 pod_ready.go:102] pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:08.584517 1011955 pod_ready.go:102] pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:08.427431 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.161882333s)
	I0116 03:14:08.427460 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0116 03:14:08.427485 1011460 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:14:08.427533 1011460 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:14:10.992767 1011460 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.565203793s)
	I0116 03:14:10.992809 1011460 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17967-971255/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0116 03:14:10.992842 1011460 cache_images.go:123] Successfully loaded all cached images
	I0116 03:14:10.992849 1011460 cache_images.go:92] LoadImages completed in 17.624696262s
	I0116 03:14:10.992918 1011460 ssh_runner.go:195] Run: crio config
	I0116 03:14:11.057517 1011460 cni.go:84] Creating CNI manager for ""
	I0116 03:14:11.057552 1011460 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:14:11.057583 1011460 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:14:11.057614 1011460 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.29 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-934668 NodeName:no-preload-934668 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:14:11.057793 1011460 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-934668"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:14:11.057907 1011460 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-934668 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-934668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:14:11.057969 1011460 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0116 03:14:11.070793 1011460 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:14:11.070892 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:14:11.082832 1011460 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0116 03:14:11.103800 1011460 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0116 03:14:11.121508 1011460 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
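	The kubeadm config shown above is rendered at kubeadm.go:181 from the options struct printed at kubeadm.go:176, then shipped to the guest as /var/tmp/minikube/kubeadm.yaml.new (2106 bytes here) alongside the kubelet unit and its drop-in. A stripped-down sketch of that kind of templating with text/template; the Opts struct and the template fragment are invented for the example and are not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// Opts mirrors a few of the fields from the "kubeadm options" log line;
// the struct and template below are illustrative only.
type Opts struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, Opts{
		AdvertiseAddress:  "192.168.50.29",
		BindPort:          8443,
		NodeName:          "no-preload-934668",
		KubernetesVersion: "v1.29.0-rc.2",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	})
}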
	I0116 03:14:11.139941 1011460 ssh_runner.go:195] Run: grep 192.168.50.29	control-plane.minikube.internal$ /etc/hosts
	I0116 03:14:11.144648 1011460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:14:11.160034 1011460 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668 for IP: 192.168.50.29
	I0116 03:14:11.160079 1011460 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:14:11.160310 1011460 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 03:14:11.160371 1011460 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 03:14:11.160469 1011460 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/client.key
	I0116 03:14:11.160562 1011460 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/apiserver.key.1326a2fe
	I0116 03:14:11.160631 1011460 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/proxy-client.key
	I0116 03:14:11.160780 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 03:14:11.160861 1011460 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 03:14:11.160887 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 03:14:11.160927 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 03:14:11.160976 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:14:11.161008 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 03:14:11.161070 1011460 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:14:11.161922 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:14:11.192041 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:14:11.217326 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:14:11.243091 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:14:11.268536 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:14:11.291985 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:14:11.317943 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:14:11.343359 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:14:11.368837 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 03:14:11.392907 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:14:11.417266 1011460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 03:14:11.441365 1011460 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:14:11.459961 1011460 ssh_runner.go:195] Run: openssl version
	I0116 03:14:11.466850 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 03:14:11.477985 1011460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 03:14:11.483233 1011460 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 03:14:11.483296 1011460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 03:14:11.489111 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:14:11.500499 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:14:11.511988 1011460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:14:11.517205 1011460 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:14:11.517300 1011460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:14:11.523361 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:14:11.536305 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 03:14:11.549308 1011460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 03:14:11.554540 1011460 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 03:14:11.554632 1011460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 03:14:11.560816 1011460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 03:14:11.573145 1011460 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:14:11.578678 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:14:11.586807 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:14:11.593146 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:14:11.599812 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:14:11.606216 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:14:11.612827 1011460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
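	In the certs.go section above minikube installs the CA bundle by copying each PEM into /usr/share/ca-certificates and symlinking it under /etc/ssl/certs by its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), then confirms each control-plane certificate stays valid for at least another day with openssl -checkend 86400. The same two openssl calls driven from Go; the paths are the ones from the log and the helpers are illustrative, not minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash recreates the /etc/ssl/certs/<hash>.0 symlink that the log shows
// being made with "ln -fs" after "openssl x509 -hash -noout -in <pem>".
func linkByHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // -f behaviour: replace an existing link if present
	return os.Symlink(pem, link)
}

// validForADay mirrors "openssl x509 -noout -in <crt> -checkend 86400":
// exit status 0 means the certificate does not expire within 24 hours.
func validForADay(crt string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", crt, "-checkend", "86400").Run() == nil
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("link:", err)
	}
	fmt.Println("apiserver-etcd-client still valid:",
		validForADay("/var/lib/minikube/certs/apiserver-etcd-client.crt"))
}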
	I0116 03:14:11.619060 1011460 kubeadm.go:404] StartCluster: {Name:no-preload-934668 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-934668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.29 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:14:11.619201 1011460 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:14:11.619271 1011460 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:14:11.661293 1011460 cri.go:89] found id: ""
	I0116 03:14:11.661390 1011460 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:14:11.672886 1011460 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:14:11.672921 1011460 kubeadm.go:636] restartCluster start
	I0116 03:14:11.672998 1011460 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:14:11.683692 1011460 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:11.684896 1011460 kubeconfig.go:92] found "no-preload-934668" server: "https://192.168.50.29:8443"
	I0116 03:14:11.687623 1011460 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:14:11.698887 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:11.698967 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:11.711969 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
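	During restartCluster, api_server.go:166 decides whether an apiserver is already running by looking for its process with pgrep -xnf kube-apiserver.*minikube.*; exit status 1 means no match, which minikube reports as the repeated "stopped: unable to get apiserver pid" warnings that follow until the static pod comes up. A tiny equivalent check (a hedged sketch run locally; in the real flow the command is executed on the guest over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID mimics the check in the log: pgrep exits non-zero when no
// kube-apiserver process matches the pattern.
func apiserverPID() (string, bool) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", false
	}
	return strings.TrimSpace(string(out)), true
}

func main() {
	if pid, ok := apiserverPID(); ok {
		fmt.Println("apiserver running with pid", pid)
	} else {
		fmt.Println("stopped: no apiserver process yet, retry shortly")
	}
}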
	I0116 03:14:12.199181 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:12.199277 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:12.213324 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:09.463196 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:11.464458 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:13.466325 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:10.585205 1011955 pod_ready.go:102] pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:12.585027 1011955 pod_ready.go:92] pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:12.585060 1011955 pod_ready.go:81] duration metric: took 8.009439483s waiting for pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.585074 1011955 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.592172 1011955 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:12.592208 1011955 pod_ready.go:81] duration metric: took 7.125355ms waiting for pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.592224 1011955 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.600113 1011955 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:12.600141 1011955 pod_ready.go:81] duration metric: took 7.90138ms waiting for pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.600152 1011955 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lntj2" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.606813 1011955 pod_ready.go:92] pod "kube-proxy-lntj2" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:12.606843 1011955 pod_ready.go:81] duration metric: took 6.6848ms waiting for pod "kube-proxy-lntj2" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.606852 1011955 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:14.115221 1011955 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:14.115256 1011955 pod_ready.go:81] duration metric: took 1.508396572s waiting for pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:14.115272 1011955 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:12.699849 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:12.700002 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:12.713330 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:13.199827 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:13.199938 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:13.212593 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:13.699177 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:13.699280 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:13.713754 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:14.199293 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:14.199387 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:14.211364 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:14.699976 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:14.700082 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:14.713420 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:15.198943 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:15.199056 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:15.211474 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:15.699723 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:15.699858 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:15.711566 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:16.199077 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:16.199195 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:16.210174 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:16.699188 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:16.699296 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:16.710971 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:17.199584 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:17.199733 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:17.211935 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:15.964130 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:18.463789 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:16.731737 1011681 kubeadm.go:787] kubelet initialised
	I0116 03:14:16.731763 1011681 kubeadm.go:788] duration metric: took 34.810672543s waiting for restarted kubelet to initialise ...
	I0116 03:14:16.731771 1011681 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:14:16.736630 1011681 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5j7ps" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.742482 1011681 pod_ready.go:92] pod "coredns-5644d7b6d9-5j7ps" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:16.742513 1011681 pod_ready.go:81] duration metric: took 5.851753ms waiting for pod "coredns-5644d7b6d9-5j7ps" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.742524 1011681 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-dfsf5" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.747113 1011681 pod_ready.go:92] pod "coredns-5644d7b6d9-dfsf5" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:16.747137 1011681 pod_ready.go:81] duration metric: took 4.606585ms waiting for pod "coredns-5644d7b6d9-dfsf5" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.747146 1011681 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.752744 1011681 pod_ready.go:92] pod "etcd-old-k8s-version-788237" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:16.752780 1011681 pod_ready.go:81] duration metric: took 5.626197ms waiting for pod "etcd-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.752794 1011681 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.757419 1011681 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-788237" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:16.757453 1011681 pod_ready.go:81] duration metric: took 4.649381ms waiting for pod "kube-apiserver-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:16.757468 1011681 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.131588 1011681 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-788237" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:17.131616 1011681 pod_ready.go:81] duration metric: took 374.139932ms waiting for pod "kube-controller-manager-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.131626 1011681 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vtxjk" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.531570 1011681 pod_ready.go:92] pod "kube-proxy-vtxjk" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:17.531610 1011681 pod_ready.go:81] duration metric: took 399.976074ms waiting for pod "kube-proxy-vtxjk" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.531625 1011681 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.931792 1011681 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-788237" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:17.931820 1011681 pod_ready.go:81] duration metric: took 400.186985ms waiting for pod "kube-scheduler-old-k8s-version-788237" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:17.931832 1011681 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:19.939055 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:16.125560 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:18.624277 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:17.699246 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:17.699353 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:17.712025 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:18.199655 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:18.199784 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:18.212198 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:18.699816 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:18.699906 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:18.713019 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:19.199601 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:19.199706 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:19.211380 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:19.698919 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:19.699010 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:19.711001 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:20.199588 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:20.199694 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:20.211824 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:20.699345 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:20.699455 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:20.711489 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:21.199006 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:21.199111 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:21.210606 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:21.699928 1011460 api_server.go:166] Checking apiserver status ...
	I0116 03:14:21.700036 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:14:21.712086 1011460 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:14:21.712119 1011460 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
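The W-level "stopped: unable to get apiserver pid" entries above all come from one probe that shells out to pgrep roughly every 500ms until the surrounding four-minute context expires. A minimal local sketch of that probe, with hypothetical function names and run directly instead of through minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // apiserverPID mirrors the probe in the log:
    // "sudo pgrep -xnf kube-apiserver.*minikube.*". pgrep exits 1 when
    // nothing matches, which surfaces here as an error, just like the
    // W-level "Process exited with status 1" lines above.
    func apiserverPID() (string, error) {
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return "", fmt.Errorf("unable to get apiserver pid: %w", err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        if pid, err := apiserverPID(); err != nil {
            fmt.Println("stopped:", err)
        } else {
            fmt.Println("kube-apiserver pid:", pid)
        }
    }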
	I0116 03:14:21.712128 1011460 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:14:21.712140 1011460 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:14:21.712220 1011460 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:14:21.754523 1011460 cri.go:89] found id: ""
	I0116 03:14:21.754644 1011460 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:14:21.770459 1011460 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:14:21.781022 1011460 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
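The config check above fails simply because none of the four kubeconfig files exist on the node yet, so stale-config cleanup is skipped and the cluster is reconfigured from /var/tmp/minikube/kubeadm.yaml. A sketch of the same existence check, assuming only the file paths shown in the ls output:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // The same four files the "sudo ls -la" probe above looks for.
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        missing := 0
        for _, f := range files {
            if _, err := os.Stat(f); err != nil {
                fmt.Printf("cannot access %q: %v\n", f, err)
                missing++
            }
        }
        if missing > 0 {
            // Matches the log: cleanup is skipped and kubeadm is re-run
            // against the freshly copied kubeadm.yaml.
            fmt.Println("config check failed, skipping stale config cleanup")
        }
    }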
	I0116 03:14:21.781090 1011460 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:14:21.790780 1011460 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:14:21.790817 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:21.928434 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:20.962684 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:23.464521 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:21.941218 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:24.440549 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:21.123377 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:23.622729 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:22.965238 1011460 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.036762464s)
	I0116 03:14:22.965272 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:23.176590 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:23.273101 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
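The five commands above re-run individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml rather than performing a full init. A hedged sketch of driving that phase sequence, with the binary path and config path taken from the log and everything else illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubeadmCfg := "/var/tmp/minikube/kubeadm.yaml"
        binDir := "/var/lib/minikube/binaries/v1.29.0-rc.2"
        // Phases in the same order as the log above.
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, phase...)
            args = append(args, "--config", kubeadmCfg)
            // Equivalent to: sudo env PATH=<binDir>:$PATH kubeadm init phase <phase> --config <cfg>
            cmd := exec.Command("sudo",
                append([]string{"env", "PATH=" + binDir + ":" + os.Getenv("PATH"), "kubeadm"}, args...)...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Printf("kubeadm init phase %v failed: %v\n", phase, err)
                return
            }
        }
    }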
	I0116 03:14:23.360976 1011460 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:14:23.361080 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:23.861957 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:24.361978 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:24.861204 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:25.361957 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:25.861277 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:14:25.884677 1011460 api_server.go:72] duration metric: took 2.523698355s to wait for apiserver process to appear ...
	I0116 03:14:25.884716 1011460 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:14:25.884742 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:25.885342 1011460 api_server.go:269] stopped: https://192.168.50.29:8443/healthz: Get "https://192.168.50.29:8443/healthz": dial tcp 192.168.50.29:8443: connect: connection refused
	I0116 03:14:26.385713 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:25.963386 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:28.463102 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:26.941545 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:29.439950 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:25.624030 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:27.624836 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:30.125387 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:30.121267 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:14:30.121300 1011460 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:14:30.121319 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:30.224826 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:14:30.224860 1011460 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:14:30.385083 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:30.392851 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:14:30.392896 1011460 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:14:30.885620 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:30.891094 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:14:30.891136 1011460 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:14:31.385130 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:31.399561 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:14:31.399594 1011460 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:14:31.885471 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:14:31.890676 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 200:
	ok
	I0116 03:14:31.900046 1011460 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 03:14:31.900079 1011460 api_server.go:131] duration metric: took 6.015355459s to wait for apiserver health ...
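The healthz wait above passes through three distinct shapes before succeeding: connection refused while the apiserver socket is not yet listening, 403 while the anonymous user is still forbidden, and 500 while post-start hooks such as rbac/bootstrap-roles are still failing, until the endpoint finally returns 200 "ok". A minimal sketch of that poll, assuming only the URL from the log and using an insecure TLS client purely for brevity:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // the "returned 200: ok" case
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
        }
        return fmt.Errorf("apiserver healthz never returned 200 within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.29:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }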
	I0116 03:14:31.900104 1011460 cni.go:84] Creating CNI manager for ""
	I0116 03:14:31.900111 1011460 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:14:31.902248 1011460 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:14:31.903832 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:14:31.920161 1011460 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
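The 457-byte /etc/cni/net.d/1-k8s.conflist copied above is not reproduced in the log; the sketch below writes a hypothetical minimal bridge-plus-portmap conflist of the kind the "Configuring bridge CNI" step installs, with the plugin fields and subnet chosen only for illustration:

    package main

    import "os"

    func main() {
        // Hypothetical minimal bridge CNI config; the real file's contents
        // and the pod subnet are assumptions, not copied from the log.
        conflist := []byte(`{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
    `)
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", conflist, 0o644); err != nil {
            panic(err)
        }
    }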
	I0116 03:14:31.946401 1011460 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:14:31.957546 1011460 system_pods.go:59] 8 kube-system pods found
	I0116 03:14:31.957594 1011460 system_pods.go:61] "coredns-76f75df574-j55q6" [b8775751-87dd-4a05-8c84-05c09c947102] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:14:31.957605 1011460 system_pods.go:61] "etcd-no-preload-934668" [3ce80d11-c902-4c1d-9e2d-a65fed4d33c3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:14:31.957618 1011460 system_pods.go:61] "kube-apiserver-no-preload-934668" [3636a336-1ff1-4482-bf8c-559f8ae04f40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:14:31.957627 1011460 system_pods.go:61] "kube-controller-manager-no-preload-934668" [71bdeebc-ac26-43ca-bffe-0e8e97293d5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:14:31.957635 1011460 system_pods.go:61] "kube-proxy-c56bl" [d57e14d7-5e87-469f-8819-2749b2f7b54f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:14:31.957650 1011460 system_pods.go:61] "kube-scheduler-no-preload-934668" [10c61a29-dda4-4975-b290-a337e67070e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:14:31.957665 1011460 system_pods.go:61] "metrics-server-57f55c9bc5-lgmnp" [36a9cbc0-7644-421c-ab26-7262a295ea66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:14:31.957677 1011460 system_pods.go:61] "storage-provisioner" [c35e3af3-b48e-4184-8c06-2bd5bbbc399e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:14:31.957688 1011460 system_pods.go:74] duration metric: took 11.2629ms to wait for pod list to return data ...
	I0116 03:14:31.957703 1011460 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:14:31.963828 1011460 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:14:31.963860 1011460 node_conditions.go:123] node cpu capacity is 2
	I0116 03:14:31.963871 1011460 node_conditions.go:105] duration metric: took 6.162948ms to run NodePressure ...
	I0116 03:14:31.963894 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:14:32.261460 1011460 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:14:32.268148 1011460 kubeadm.go:787] kubelet initialised
	I0116 03:14:32.268181 1011460 kubeadm.go:788] duration metric: took 6.679075ms waiting for restarted kubelet to initialise ...
	I0116 03:14:32.268197 1011460 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
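Every pod_ready.go line that follows is the same check repeated: fetch the pod and inspect its Ready condition, logging :92 once it is True and :102 every couple of seconds while it is False, up to the stated four-minute cap per pod. A client-go sketch of that condition check, with the kubeconfig path illustrative and the pod name taken from the log below:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True, which is
    // exactly what the "Ready":"True"/"False" log lines record.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Illustrative input; the real tooling derives this from the profile.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-j55q6", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }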
	I0116 03:14:32.273936 1011460 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-j55q6" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:30.468482 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:32.967755 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:31.940340 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:34.440944 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:32.624635 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:35.124816 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:34.282691 1011460 pod_ready.go:102] pod "coredns-76f75df574-j55q6" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:35.787066 1011460 pod_ready.go:92] pod "coredns-76f75df574-j55q6" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:35.787097 1011460 pod_ready.go:81] duration metric: took 3.513129426s waiting for pod "coredns-76f75df574-j55q6" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:35.787112 1011460 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:35.463919 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:37.963533 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:36.939219 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:38.939377 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:37.128157 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:39.623730 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:37.798112 1011460 pod_ready.go:102] pod "etcd-no-preload-934668" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:39.794453 1011460 pod_ready.go:92] pod "etcd-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:39.794486 1011460 pod_ready.go:81] duration metric: took 4.007365728s waiting for pod "etcd-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:39.794496 1011460 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:39.799569 1011460 pod_ready.go:92] pod "kube-apiserver-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:39.799593 1011460 pod_ready.go:81] duration metric: took 5.090956ms waiting for pod "kube-apiserver-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:39.799602 1011460 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:40.309705 1011460 pod_ready.go:92] pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:40.309748 1011460 pod_ready.go:81] duration metric: took 510.137584ms waiting for pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:40.309761 1011460 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c56bl" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:40.315446 1011460 pod_ready.go:92] pod "kube-proxy-c56bl" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:40.315480 1011460 pod_ready.go:81] duration metric: took 5.710622ms waiting for pod "kube-proxy-c56bl" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:40.315494 1011460 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:40.467180 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:42.964593 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:40.940105 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:43.440135 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:41.623831 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:44.128608 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:42.324063 1011460 pod_ready.go:102] pod "kube-scheduler-no-preload-934668" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:44.325488 1011460 pod_ready.go:102] pod "kube-scheduler-no-preload-934668" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:44.823767 1011460 pod_ready.go:92] pod "kube-scheduler-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:44.823802 1011460 pod_ready.go:81] duration metric: took 4.508298497s waiting for pod "kube-scheduler-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:44.823818 1011460 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:46.834119 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:44.967470 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:47.467233 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:45.939182 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:48.439510 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:46.623093 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:48.623452 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:49.333255 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:51.334349 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:49.962021 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:51.964770 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:50.439867 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:52.938999 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:54.939661 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:50.624537 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:52.631432 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:55.124303 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:53.334508 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:55.832976 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:53.965445 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:56.462907 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:58.463527 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:57.438920 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:59.440238 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:57.621578 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:59.625435 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:14:58.332671 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:00.831831 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:00.465186 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:02.965629 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:01.440271 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:03.938665 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:02.124017 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:04.623475 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:03.334393 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:05.831665 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:05.463235 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:07.467282 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:05.939523 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:07.940337 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:07.122018 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:09.128032 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:08.331820 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:10.831910 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:09.963317 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:11.966051 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:10.439441 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:12.440308 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:14.940075 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:11.626866 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:14.122414 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:13.332152 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:15.831466 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:14.462126 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:16.465823 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:16.940118 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:19.440426 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:16.124215 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:18.624377 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:17.832950 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:20.329770 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:18.962537 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:20.966990 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:23.467331 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:21.939074 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:23.939905 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:21.122701 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:23.124103 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:25.137599 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:22.332462 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:24.832064 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:25.965556 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:28.467190 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:26.440039 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:28.940196 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:27.626127 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:29.626656 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:27.335063 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:29.834492 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:30.963079 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:33.462526 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:31.441125 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:33.939106 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:32.122443 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:34.123801 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:32.332153 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:34.832479 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:35.963546 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:37.964525 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:35.939539 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:38.439743 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:36.126074 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:38.623002 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:37.332835 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:39.832398 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:40.463769 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:42.962649 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:40.441879 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:42.939722 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:41.123840 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:43.625404 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:42.331290 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:44.831904 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:46.835841 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:44.964678 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:47.462896 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:45.439209 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:47.440145 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:49.939854 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:46.123807 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:48.126826 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:49.332005 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:51.332502 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:49.464762 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:51.964049 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:51.939904 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:54.439236 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:50.623153 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:52.624345 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:54.627203 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:53.831895 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:55.832232 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:54.463030 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:56.963946 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:56.439394 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:58.939030 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:56.627957 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:59.123599 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:58.332413 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:00.332637 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:15:59.463703 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:01.964436 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:00.941424 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:03.439546 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:01.123729 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:03.124738 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:02.832493 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:04.832547 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:04.463420 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:06.463569 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:05.941019 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:07.944737 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:05.624443 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:08.122957 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:07.333014 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:09.832431 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:11.834194 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:08.963205 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:10.963471 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:13.463710 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:10.439631 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:12.940212 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:10.622909 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:12.627122 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:15.122958 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:14.332800 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:16.831137 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:15.466395 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:17.962126 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:15.440905 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:17.939481 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:19.939923 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:17.624106 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:19.624608 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:18.832920 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:20.833205 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:19.963345 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:22.464212 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:21.941453 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:24.440153 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:22.122244 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:24.123259 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:23.331669 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:25.331743 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:24.963259 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:26.963490 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:26.442666 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:28.939968 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:26.123378 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:28.125204 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:27.332247 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:29.831956 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:28.963524 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:30.964135 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:33.462993 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:31.439282 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:33.439561 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:30.623257 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:33.123409 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:32.330980 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:34.332254 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:36.332346 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:35.463102 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:37.466011 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:35.441431 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:37.938841 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:39.939708 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:35.622848 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:37.623714 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:39.624018 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:38.333242 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:40.333759 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:39.961985 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:41.963743 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:41.940877 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:44.439855 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:42.123548 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:44.123765 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:42.831179 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:44.832125 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:46.832823 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:44.464876 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:46.963061 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:46.940520 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:49.438035 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:46.622349 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:48.626247 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:49.331443 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:51.832493 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:48.963476 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:50.963937 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:53.463054 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:51.439462 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:53.938617 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:51.124901 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:53.621994 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:53.834097 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:56.331556 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:55.464589 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:57.465198 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:55.939032 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:57.939901 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:59.940433 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:55.623283 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:58.123546 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:58.831287 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:00.833045 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:16:59.963001 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:02.464145 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:02.438594 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:04.439026 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:00.623369 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:03.122925 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:03.336121 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:05.832499 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:04.962987 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:06.963706 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:06.439557 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:08.440103 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:05.623650 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:08.123661 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:08.333356 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:10.832246 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:09.462321 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:11.464231 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:10.440612 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:12.939770 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:10.622705 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:13.123057 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:15.123165 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:13.330980 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:15.331911 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:13.963350 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:15.965533 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:18.464316 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:15.439711 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:17.940475 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:19.940957 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:17.124102 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:19.124940 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:17.334609 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:19.832181 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:21.834883 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:20.468955 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:22.964039 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:22.441403 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:24.938835 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:21.624672 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:24.121761 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:24.332265 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:26.332655 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:25.463695 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:27.963694 1011501 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:27.963726 1011501 pod_ready.go:81] duration metric: took 4m0.008813288s waiting for pod "metrics-server-57f55c9bc5-7d2fh" in "kube-system" namespace to be "Ready" ...
	E0116 03:17:27.963735 1011501 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:17:27.963742 1011501 pod_ready.go:38] duration metric: took 4m3.208815045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:17:27.963758 1011501 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:17:27.963814 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:17:27.963886 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:17:28.018667 1011501 cri.go:89] found id: "42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:28.018693 1011501 cri.go:89] found id: ""
	I0116 03:17:28.018701 1011501 logs.go:284] 1 containers: [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f]
	I0116 03:17:28.018769 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.023716 1011501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:17:28.023802 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:17:28.076139 1011501 cri.go:89] found id: "36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:28.076173 1011501 cri.go:89] found id: ""
	I0116 03:17:28.076182 1011501 logs.go:284] 1 containers: [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618]
	I0116 03:17:28.076233 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.080954 1011501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:17:28.081020 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:17:28.126518 1011501 cri.go:89] found id: "2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:28.126544 1011501 cri.go:89] found id: ""
	I0116 03:17:28.126552 1011501 logs.go:284] 1 containers: [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70]
	I0116 03:17:28.126611 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.131611 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:17:28.131692 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:17:28.204571 1011501 cri.go:89] found id: "ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:28.204604 1011501 cri.go:89] found id: ""
	I0116 03:17:28.204612 1011501 logs.go:284] 1 containers: [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8]
	I0116 03:17:28.204672 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.210340 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:17:28.210415 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:17:28.262556 1011501 cri.go:89] found id: "da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:28.262587 1011501 cri.go:89] found id: ""
	I0116 03:17:28.262598 1011501 logs.go:284] 1 containers: [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047]
	I0116 03:17:28.262666 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.267670 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:17:28.267763 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:17:28.312958 1011501 cri.go:89] found id: "f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:28.312982 1011501 cri.go:89] found id: ""
	I0116 03:17:28.312990 1011501 logs.go:284] 1 containers: [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994]
	I0116 03:17:28.313040 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.317874 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:17:28.317951 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:17:28.363140 1011501 cri.go:89] found id: ""
	I0116 03:17:28.363172 1011501 logs.go:284] 0 containers: []
	W0116 03:17:28.363181 1011501 logs.go:286] No container was found matching "kindnet"
	I0116 03:17:28.363188 1011501 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:17:28.363245 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:17:28.408300 1011501 cri.go:89] found id: "0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:28.408330 1011501 cri.go:89] found id: "653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:28.408335 1011501 cri.go:89] found id: ""
	I0116 03:17:28.408342 1011501 logs.go:284] 2 containers: [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76]
	I0116 03:17:28.408406 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.413146 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:28.418553 1011501 logs.go:123] Gathering logs for kube-scheduler [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8] ...
	I0116 03:17:28.418588 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:28.466255 1011501 logs.go:123] Gathering logs for kube-proxy [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047] ...
	I0116 03:17:28.466305 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:28.511913 1011501 logs.go:123] Gathering logs for storage-provisioner [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657] ...
	I0116 03:17:28.511954 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:28.551053 1011501 logs.go:123] Gathering logs for dmesg ...
	I0116 03:17:28.551093 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:17:28.571627 1011501 logs.go:123] Gathering logs for kube-controller-manager [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994] ...
	I0116 03:17:28.571663 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:28.631193 1011501 logs.go:123] Gathering logs for storage-provisioner [653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76] ...
	I0116 03:17:28.631236 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:28.671010 1011501 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:17:28.671047 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:17:26.940503 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:28.941291 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:26.123594 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:28.124053 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:28.341231 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:30.831479 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:29.167771 1011501 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:17:29.167828 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:17:29.340535 1011501 logs.go:123] Gathering logs for etcd [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618] ...
	I0116 03:17:29.340574 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:29.397815 1011501 logs.go:123] Gathering logs for container status ...
	I0116 03:17:29.397861 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:17:29.459355 1011501 logs.go:123] Gathering logs for kubelet ...
	I0116 03:17:29.459408 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:17:29.519244 1011501 logs.go:123] Gathering logs for kube-apiserver [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f] ...
	I0116 03:17:29.519289 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:29.577686 1011501 logs.go:123] Gathering logs for coredns [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70] ...
	I0116 03:17:29.577736 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:32.124219 1011501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:17:32.141191 1011501 api_server.go:72] duration metric: took 4m13.431910425s to wait for apiserver process to appear ...
	I0116 03:17:32.141224 1011501 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:17:32.141316 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:17:32.141397 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:17:32.182105 1011501 cri.go:89] found id: "42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:32.182133 1011501 cri.go:89] found id: ""
	I0116 03:17:32.182142 1011501 logs.go:284] 1 containers: [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f]
	I0116 03:17:32.182200 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.186819 1011501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:17:32.186900 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:17:32.234240 1011501 cri.go:89] found id: "36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:32.234282 1011501 cri.go:89] found id: ""
	I0116 03:17:32.234294 1011501 logs.go:284] 1 containers: [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618]
	I0116 03:17:32.234366 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.240481 1011501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:17:32.240550 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:17:32.284981 1011501 cri.go:89] found id: "2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:32.285016 1011501 cri.go:89] found id: ""
	I0116 03:17:32.285028 1011501 logs.go:284] 1 containers: [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70]
	I0116 03:17:32.285095 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.289894 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:17:32.289985 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:17:32.331520 1011501 cri.go:89] found id: "ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:32.331555 1011501 cri.go:89] found id: ""
	I0116 03:17:32.331567 1011501 logs.go:284] 1 containers: [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8]
	I0116 03:17:32.331646 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.336053 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:17:32.336131 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:17:32.383199 1011501 cri.go:89] found id: "da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:32.383233 1011501 cri.go:89] found id: ""
	I0116 03:17:32.383253 1011501 logs.go:284] 1 containers: [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047]
	I0116 03:17:32.383324 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.388197 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:17:32.388278 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:17:32.435679 1011501 cri.go:89] found id: "f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:32.435711 1011501 cri.go:89] found id: ""
	I0116 03:17:32.435722 1011501 logs.go:284] 1 containers: [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994]
	I0116 03:17:32.435795 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.441503 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:17:32.441578 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:17:32.484750 1011501 cri.go:89] found id: ""
	I0116 03:17:32.484783 1011501 logs.go:284] 0 containers: []
	W0116 03:17:32.484794 1011501 logs.go:286] No container was found matching "kindnet"
	I0116 03:17:32.484803 1011501 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:17:32.484872 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:17:32.534967 1011501 cri.go:89] found id: "0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:32.534996 1011501 cri.go:89] found id: "653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:32.535002 1011501 cri.go:89] found id: ""
	I0116 03:17:32.535011 1011501 logs.go:284] 2 containers: [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76]
	I0116 03:17:32.535079 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.539828 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:32.544640 1011501 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:17:32.544670 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:17:32.681760 1011501 logs.go:123] Gathering logs for kube-apiserver [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f] ...
	I0116 03:17:32.681831 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:32.741557 1011501 logs.go:123] Gathering logs for container status ...
	I0116 03:17:32.741606 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:17:32.791811 1011501 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:17:32.791857 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:17:33.242377 1011501 logs.go:123] Gathering logs for etcd [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618] ...
	I0116 03:17:33.242424 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:33.303162 1011501 logs.go:123] Gathering logs for kube-proxy [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047] ...
	I0116 03:17:33.303211 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:33.346935 1011501 logs.go:123] Gathering logs for storage-provisioner [653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76] ...
	I0116 03:17:33.346975 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:33.393563 1011501 logs.go:123] Gathering logs for kube-controller-manager [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994] ...
	I0116 03:17:33.393603 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:33.453859 1011501 logs.go:123] Gathering logs for storage-provisioner [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657] ...
	I0116 03:17:33.453902 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:33.492763 1011501 logs.go:123] Gathering logs for kubelet ...
	I0116 03:17:33.492797 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:17:33.555700 1011501 logs.go:123] Gathering logs for coredns [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70] ...
	I0116 03:17:33.555742 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:33.601049 1011501 logs.go:123] Gathering logs for kube-scheduler [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8] ...
	I0116 03:17:33.601084 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:33.652000 1011501 logs.go:123] Gathering logs for dmesg ...
	I0116 03:17:33.652035 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:17:31.438487 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:33.440493 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:30.621532 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:32.622315 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:34.622840 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:32.832920 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:35.331711 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:36.168102 1011501 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0116 03:17:36.173921 1011501 api_server.go:279] https://192.168.61.150:8443/healthz returned 200:
	ok
	I0116 03:17:36.175763 1011501 api_server.go:141] control plane version: v1.28.4
	I0116 03:17:36.175789 1011501 api_server.go:131] duration metric: took 4.034557823s to wait for apiserver health ...
	I0116 03:17:36.175798 1011501 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:17:36.175826 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:17:36.175890 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:17:36.224810 1011501 cri.go:89] found id: "42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:36.224847 1011501 cri.go:89] found id: ""
	I0116 03:17:36.224859 1011501 logs.go:284] 1 containers: [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f]
	I0116 03:17:36.224925 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.229177 1011501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:17:36.229255 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:17:36.271241 1011501 cri.go:89] found id: "36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:36.271272 1011501 cri.go:89] found id: ""
	I0116 03:17:36.271281 1011501 logs.go:284] 1 containers: [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618]
	I0116 03:17:36.271342 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.275772 1011501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:17:36.275846 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:17:36.319867 1011501 cri.go:89] found id: "2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:36.319899 1011501 cri.go:89] found id: ""
	I0116 03:17:36.319909 1011501 logs.go:284] 1 containers: [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70]
	I0116 03:17:36.319977 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.324329 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:17:36.324410 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:17:36.363526 1011501 cri.go:89] found id: "ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:36.363551 1011501 cri.go:89] found id: ""
	I0116 03:17:36.363559 1011501 logs.go:284] 1 containers: [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8]
	I0116 03:17:36.363614 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.367896 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:17:36.367974 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:17:36.408601 1011501 cri.go:89] found id: "da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:36.408642 1011501 cri.go:89] found id: ""
	I0116 03:17:36.408657 1011501 logs.go:284] 1 containers: [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047]
	I0116 03:17:36.408715 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.413041 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:17:36.413111 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:17:36.460091 1011501 cri.go:89] found id: "f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:36.460117 1011501 cri.go:89] found id: ""
	I0116 03:17:36.460126 1011501 logs.go:284] 1 containers: [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994]
	I0116 03:17:36.460201 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.464375 1011501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:17:36.464457 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:17:36.501943 1011501 cri.go:89] found id: ""
	I0116 03:17:36.501969 1011501 logs.go:284] 0 containers: []
	W0116 03:17:36.501977 1011501 logs.go:286] No container was found matching "kindnet"
	I0116 03:17:36.501984 1011501 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:17:36.502037 1011501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:17:36.550841 1011501 cri.go:89] found id: "0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:36.550874 1011501 cri.go:89] found id: "653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:36.550882 1011501 cri.go:89] found id: ""
	I0116 03:17:36.550892 1011501 logs.go:284] 2 containers: [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76]
	I0116 03:17:36.550976 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.555728 1011501 ssh_runner.go:195] Run: which crictl
	I0116 03:17:36.560058 1011501 logs.go:123] Gathering logs for kube-controller-manager [f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994] ...
	I0116 03:17:36.560087 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f75f023773154fd3722c861e7494aad7dd9f361e17a059735fef935507a94994"
	I0116 03:17:36.618163 1011501 logs.go:123] Gathering logs for kubelet ...
	I0116 03:17:36.618208 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:17:36.673167 1011501 logs.go:123] Gathering logs for dmesg ...
	I0116 03:17:36.673216 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:17:36.690061 1011501 logs.go:123] Gathering logs for storage-provisioner [0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657] ...
	I0116 03:17:36.690099 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f37f0f7c73398c72353bd7fa20c656f6c90dffa7c4d73d01f9c6d8804319657"
	I0116 03:17:36.732953 1011501 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:17:36.733013 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:17:37.127465 1011501 logs.go:123] Gathering logs for container status ...
	I0116 03:17:37.127504 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:17:37.176618 1011501 logs.go:123] Gathering logs for coredns [2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70] ...
	I0116 03:17:37.176660 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc211416aab65f634f78efecbcf662c33e971f6b6f1e0fb77492c1ef8e2cf70"
	I0116 03:17:37.223851 1011501 logs.go:123] Gathering logs for kube-scheduler [ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8] ...
	I0116 03:17:37.223895 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab456031061355e2ec86ee36ff52b4245b5ccd2fd1f87da0318e9bbd4ca512e8"
	I0116 03:17:37.265502 1011501 logs.go:123] Gathering logs for kube-proxy [da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047] ...
	I0116 03:17:37.265542 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da3ca3a9cda0a5d7ff284cd8e9e069e0ef17c913570e52561f8c7cb8be285047"
	I0116 03:17:37.323107 1011501 logs.go:123] Gathering logs for storage-provisioner [653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76] ...
	I0116 03:17:37.323140 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 653a87cc5b4e5bb52f728e222fc7a58f19453b0263453d6c259e81a206ffac76"
	I0116 03:17:37.368305 1011501 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:17:37.368348 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:17:37.519310 1011501 logs.go:123] Gathering logs for kube-apiserver [42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f] ...
	I0116 03:17:37.519352 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42d452ff0268fa05f5c5b20a7332caca85b7aea961642568ec84158f105b568f"
	I0116 03:17:37.580961 1011501 logs.go:123] Gathering logs for etcd [36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618] ...
	I0116 03:17:37.581000 1011501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36288d0c42d12139e9355a5d562d600f6065e59336e28b57558b0bf5ea3f0618"
	I0116 03:17:35.940233 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:38.439452 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:40.146809 1011501 system_pods.go:59] 8 kube-system pods found
	I0116 03:17:40.146843 1011501 system_pods.go:61] "coredns-5dd5756b68-stqh5" [adbcef96-218b-42ed-9daf-72c274be0690] Running
	I0116 03:17:40.146849 1011501 system_pods.go:61] "etcd-embed-certs-480663" [6694af11-3b1a-4b84-adcb-3416b87a076f] Running
	I0116 03:17:40.146853 1011501 system_pods.go:61] "kube-apiserver-embed-certs-480663" [3a3d0e5d-35eb-4f2f-9686-b17af19bc777] Running
	I0116 03:17:40.146857 1011501 system_pods.go:61] "kube-controller-manager-embed-certs-480663" [729b671f-23b9-409b-9f11-d6992f2355c7] Running
	I0116 03:17:40.146861 1011501 system_pods.go:61] "kube-proxy-j4786" [aabb98a7-fe55-4105-a5d2-c1e312464107] Running
	I0116 03:17:40.146865 1011501 system_pods.go:61] "kube-scheduler-embed-certs-480663" [11baa5d7-6e7b-4a25-be6f-e7975b51023d] Running
	I0116 03:17:40.146872 1011501 system_pods.go:61] "metrics-server-57f55c9bc5-7d2fh" [512cf579-f335-4995-8721-74bb84da776e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:17:40.146877 1011501 system_pods.go:61] "storage-provisioner" [da59ff59-869f-48a9-a5c5-c95bb807cbcf] Running
	I0116 03:17:40.146887 1011501 system_pods.go:74] duration metric: took 3.971081813s to wait for pod list to return data ...
	I0116 03:17:40.146900 1011501 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:17:40.149755 1011501 default_sa.go:45] found service account: "default"
	I0116 03:17:40.149786 1011501 default_sa.go:55] duration metric: took 2.87163ms for default service account to be created ...
	I0116 03:17:40.149798 1011501 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:17:40.156300 1011501 system_pods.go:86] 8 kube-system pods found
	I0116 03:17:40.156327 1011501 system_pods.go:89] "coredns-5dd5756b68-stqh5" [adbcef96-218b-42ed-9daf-72c274be0690] Running
	I0116 03:17:40.156333 1011501 system_pods.go:89] "etcd-embed-certs-480663" [6694af11-3b1a-4b84-adcb-3416b87a076f] Running
	I0116 03:17:40.156337 1011501 system_pods.go:89] "kube-apiserver-embed-certs-480663" [3a3d0e5d-35eb-4f2f-9686-b17af19bc777] Running
	I0116 03:17:40.156341 1011501 system_pods.go:89] "kube-controller-manager-embed-certs-480663" [729b671f-23b9-409b-9f11-d6992f2355c7] Running
	I0116 03:17:40.156345 1011501 system_pods.go:89] "kube-proxy-j4786" [aabb98a7-fe55-4105-a5d2-c1e312464107] Running
	I0116 03:17:40.156349 1011501 system_pods.go:89] "kube-scheduler-embed-certs-480663" [11baa5d7-6e7b-4a25-be6f-e7975b51023d] Running
	I0116 03:17:40.156355 1011501 system_pods.go:89] "metrics-server-57f55c9bc5-7d2fh" [512cf579-f335-4995-8721-74bb84da776e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:17:40.156360 1011501 system_pods.go:89] "storage-provisioner" [da59ff59-869f-48a9-a5c5-c95bb807cbcf] Running
	I0116 03:17:40.156367 1011501 system_pods.go:126] duration metric: took 6.548782ms to wait for k8s-apps to be running ...
	I0116 03:17:40.156374 1011501 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:17:40.156421 1011501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:17:40.173539 1011501 system_svc.go:56] duration metric: took 17.152768ms WaitForService to wait for kubelet.
	I0116 03:17:40.173574 1011501 kubeadm.go:581] duration metric: took 4m21.464303041s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:17:40.173623 1011501 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:17:40.177277 1011501 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:17:40.177309 1011501 node_conditions.go:123] node cpu capacity is 2
	I0116 03:17:40.177324 1011501 node_conditions.go:105] duration metric: took 3.695642ms to run NodePressure ...
	I0116 03:17:40.177336 1011501 start.go:228] waiting for startup goroutines ...
	I0116 03:17:40.177342 1011501 start.go:233] waiting for cluster config update ...
	I0116 03:17:40.177353 1011501 start.go:242] writing updated cluster config ...
	I0116 03:17:40.177673 1011501 ssh_runner.go:195] Run: rm -f paused
	I0116 03:17:40.237611 1011501 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:17:40.239605 1011501 out.go:177] * Done! kubectl is now configured to use "embed-certs-480663" cluster and "default" namespace by default
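	(The repeated pod_ready lines above come from minikube's own readiness loop in pod_ready.go, which re-checks the pod's Ready condition every few seconds until a 4-minute deadline. The following is only a minimal, hypothetical client-go sketch of that polling pattern, included for readers unfamiliar with it; the interval, timeout, and helper names here are illustrative assumptions, not minikube's actual implementation.)

	// sketch.go: poll a pod's Ready condition until a deadline, mirroring the
	// cadence of the pod_ready log lines above. Illustrative only.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Load the default kubeconfig; for a minikube test cluster this would
		// point at the cluster under test.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)

		// Roughly the 4-minute budget seen in the log before
		// "context deadline exceeded" is reported.
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()

		// Poll about every 2.5s (the approximate cadence visible in the timestamps).
		err = wait.PollUntilContextCancel(ctx, 2500*time.Millisecond, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx,
					"metrics-server-57f55c9bc5-7d2fh", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as transient and keep polling
				}
				ready := podReady(pod)
				fmt.Printf("pod %q Ready=%v\n", pod.Name, ready)
				return ready, nil
			})
		if err != nil {
			// e.g. context deadline exceeded, matching the WaitExtra error above
			fmt.Println("wait ended:", err)
		}
	}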
	I0116 03:17:36.624876 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:39.123549 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:37.332861 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:39.832707 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:40.440194 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:42.939505 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:41.123729 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:43.124392 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:42.335659 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:44.833290 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:45.438892 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:47.439827 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:49.440946 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:45.622763 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:47.623098 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:49.623524 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:47.331849 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:49.832349 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:51.938022 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:53.939098 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:52.122851 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:54.123517 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:52.333667 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:54.832564 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:55.939981 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:57.941055 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:56.623347 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:59.123492 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:57.332003 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:17:59.332838 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:01.333665 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:00.440795 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:02.939475 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:01.623191 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:03.623475 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:03.831584 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:05.832669 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:05.438818 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:07.940446 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:06.125503 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:08.624414 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:07.832961 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:10.332435 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:10.439517 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:12.938184 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:14.939116 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:10.626134 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:13.123124 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:14.116258 1011955 pod_ready.go:81] duration metric: took 4m0.000962112s waiting for pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace to be "Ready" ...
	E0116 03:18:14.116292 1011955 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-9bsqm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:18:14.116325 1011955 pod_ready.go:38] duration metric: took 4m14.081176627s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:18:14.116391 1011955 kubeadm.go:640] restartCluster took 4m34.84299912s
	W0116 03:18:14.116515 1011955 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:18:14.116555 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:18:12.832787 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:14.833104 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:16.833154 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:16.939522 1011681 pod_ready.go:102] pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:17.932247 1011681 pod_ready.go:81] duration metric: took 4m0.000397189s waiting for pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace to be "Ready" ...
	E0116 03:18:17.932288 1011681 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-tgxzb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:18:17.932314 1011681 pod_ready.go:38] duration metric: took 4m1.200532474s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:18:17.932356 1011681 kubeadm.go:640] restartCluster took 4m59.25901651s
	W0116 03:18:17.932448 1011681 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:18:17.932484 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:18:19.332379 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:21.332813 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:24.791837 1011681 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.859306364s)
	I0116 03:18:24.791938 1011681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:18:24.810486 1011681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:18:24.822414 1011681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:18:24.834751 1011681 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:18:24.834814 1011681 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0116 03:18:25.070509 1011681 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:18:23.832402 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:25.834563 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:28.584480 1011955 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.467896175s)
	I0116 03:18:28.584554 1011955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:18:28.602324 1011955 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:18:28.614934 1011955 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:18:28.624508 1011955 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:18:28.624564 1011955 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 03:18:28.679880 1011955 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 03:18:28.679970 1011955 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:18:28.862872 1011955 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:18:28.862987 1011955 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:18:28.863151 1011955 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:18:29.129842 1011955 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:18:29.131728 1011955 out.go:204]   - Generating certificates and keys ...
	I0116 03:18:29.131835 1011955 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:18:29.131918 1011955 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:18:29.132072 1011955 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:18:29.132174 1011955 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:18:29.132294 1011955 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:18:29.132393 1011955 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:18:29.132472 1011955 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:18:29.132553 1011955 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:18:29.132646 1011955 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:18:29.132781 1011955 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:18:29.132867 1011955 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:18:29.132972 1011955 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:18:29.254715 1011955 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:18:29.440667 1011955 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:18:29.640243 1011955 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:18:29.792291 1011955 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:18:29.793072 1011955 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:18:29.799431 1011955 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:18:29.801398 1011955 out.go:204]   - Booting up control plane ...
	I0116 03:18:29.801516 1011955 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:18:29.801601 1011955 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:18:29.801686 1011955 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:18:29.820061 1011955 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:18:29.823043 1011955 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:18:29.823191 1011955 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 03:18:29.951227 1011955 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:18:27.835298 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:30.331925 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:32.332063 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:34.333064 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:36.833631 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:38.602437 1011681 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0116 03:18:38.602518 1011681 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:18:38.602608 1011681 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:18:38.602737 1011681 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:18:38.602861 1011681 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:18:38.602991 1011681 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:18:38.603089 1011681 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:18:38.603148 1011681 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0116 03:18:38.603223 1011681 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:18:38.604856 1011681 out.go:204]   - Generating certificates and keys ...
	I0116 03:18:38.604966 1011681 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:18:38.605046 1011681 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:18:38.605139 1011681 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:18:38.605222 1011681 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:18:38.605299 1011681 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:18:38.605359 1011681 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:18:38.605446 1011681 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:18:38.605510 1011681 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:18:38.605570 1011681 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:18:38.605629 1011681 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:18:38.605662 1011681 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:18:38.605707 1011681 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:18:38.605749 1011681 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:18:38.605792 1011681 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:18:38.605878 1011681 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:18:38.605964 1011681 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:18:38.606070 1011681 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:18:38.608024 1011681 out.go:204]   - Booting up control plane ...
	I0116 03:18:38.608146 1011681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:18:38.608263 1011681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:18:38.608375 1011681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:18:38.608508 1011681 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:18:38.608676 1011681 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:18:38.608755 1011681 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.506014 seconds
	I0116 03:18:38.608891 1011681 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:18:38.609075 1011681 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:18:38.609173 1011681 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:18:38.609358 1011681 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-788237 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0116 03:18:38.609437 1011681 kubeadm.go:322] [bootstrap-token] Using token: ou2w4b.xm5ff9ai4zzr80lg
	I0116 03:18:38.611110 1011681 out.go:204]   - Configuring RBAC rules ...
	I0116 03:18:38.611236 1011681 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:18:38.611429 1011681 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:18:38.611590 1011681 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:18:38.611730 1011681 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:18:38.611834 1011681 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:18:38.611886 1011681 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:18:38.611942 1011681 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:18:38.611948 1011681 kubeadm.go:322] 
	I0116 03:18:38.612019 1011681 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:18:38.612024 1011681 kubeadm.go:322] 
	I0116 03:18:38.612116 1011681 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:18:38.612122 1011681 kubeadm.go:322] 
	I0116 03:18:38.612153 1011681 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:18:38.612235 1011681 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:18:38.612296 1011681 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:18:38.612302 1011681 kubeadm.go:322] 
	I0116 03:18:38.612363 1011681 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:18:38.612452 1011681 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:18:38.612535 1011681 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:18:38.612541 1011681 kubeadm.go:322] 
	I0116 03:18:38.612641 1011681 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0116 03:18:38.612732 1011681 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:18:38.612738 1011681 kubeadm.go:322] 
	I0116 03:18:38.612838 1011681 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ou2w4b.xm5ff9ai4zzr80lg \
	I0116 03:18:38.612975 1011681 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b \
	I0116 03:18:38.613007 1011681 kubeadm.go:322]     --control-plane 	  
	I0116 03:18:38.613013 1011681 kubeadm.go:322] 
	I0116 03:18:38.613115 1011681 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:18:38.613122 1011681 kubeadm.go:322] 
	I0116 03:18:38.613224 1011681 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ou2w4b.xm5ff9ai4zzr80lg \
	I0116 03:18:38.613366 1011681 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b 
	I0116 03:18:38.613378 1011681 cni.go:84] Creating CNI manager for ""
	I0116 03:18:38.613386 1011681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:18:38.615140 1011681 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
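"Configuring bridge CNI" refers to the conflist minikube writes to /etc/cni/net.d on the node (the 457-byte /etc/cni/net.d/1-k8s.conflist scp'd a few lines below); this is typically the standard bridge plugin with host-local IPAM plus portmap, though the exact file contents are not shown in this log. A quick way to confirm what was actually written (ordinary minikube ssh usage, profile name taken from the log, not part of the harness) would be:

    # List and print the CNI config minikube generated on the node
    minikube -p old-k8s-version-788237 ssh -- sudo ls -la /etc/cni/net.d
    minikube -p old-k8s-version-788237 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist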
	I0116 03:18:38.454228 1011955 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502851 seconds
	I0116 03:18:38.454363 1011955 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:18:38.474581 1011955 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:18:39.018312 1011955 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:18:39.018620 1011955 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-775571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 03:18:39.535782 1011955 kubeadm.go:322] [bootstrap-token] Using token: 8fntor.yrfb8kfaxajcp5qt
	I0116 03:18:39.537357 1011955 out.go:204]   - Configuring RBAC rules ...
	I0116 03:18:39.537505 1011955 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:18:39.552902 1011955 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:18:39.571482 1011955 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:18:39.575866 1011955 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:18:39.581062 1011955 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:18:39.586833 1011955 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:18:39.619342 1011955 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:18:39.888315 1011955 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:18:39.966804 1011955 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:18:39.971287 1011955 kubeadm.go:322] 
	I0116 03:18:39.971371 1011955 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:18:39.971383 1011955 kubeadm.go:322] 
	I0116 03:18:39.971472 1011955 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:18:39.971482 1011955 kubeadm.go:322] 
	I0116 03:18:39.971556 1011955 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:18:39.971657 1011955 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:18:39.971750 1011955 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:18:39.971761 1011955 kubeadm.go:322] 
	I0116 03:18:39.971835 1011955 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 03:18:39.971846 1011955 kubeadm.go:322] 
	I0116 03:18:39.971927 1011955 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 03:18:39.971941 1011955 kubeadm.go:322] 
	I0116 03:18:39.971984 1011955 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:18:39.972080 1011955 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:18:39.972187 1011955 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:18:39.972199 1011955 kubeadm.go:322] 
	I0116 03:18:39.972317 1011955 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:18:39.972431 1011955 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:18:39.972450 1011955 kubeadm.go:322] 
	I0116 03:18:39.972580 1011955 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 8fntor.yrfb8kfaxajcp5qt \
	I0116 03:18:39.972743 1011955 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b \
	I0116 03:18:39.972782 1011955 kubeadm.go:322] 	--control-plane 
	I0116 03:18:39.972805 1011955 kubeadm.go:322] 
	I0116 03:18:39.972924 1011955 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:18:39.972942 1011955 kubeadm.go:322] 
	I0116 03:18:39.973047 1011955 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 8fntor.yrfb8kfaxajcp5qt \
	I0116 03:18:39.973210 1011955 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b 
	I0116 03:18:39.974532 1011955 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:18:39.974577 1011955 cni.go:84] Creating CNI manager for ""
	I0116 03:18:39.974604 1011955 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:18:39.976623 1011955 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:18:38.616520 1011681 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:18:38.639990 1011681 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:18:38.666967 1011681 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:18:38.667168 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:38.667280 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=old-k8s-version-788237 minikube.k8s.io/updated_at=2024_01_16T03_18_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:38.688522 1011681 ops.go:34] apiserver oom_adj: -16
	I0116 03:18:38.976096 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:39.476978 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:39.976086 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:39.977876 1011955 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:18:40.005273 1011955 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:18:40.087713 1011955 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:18:40.087863 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:40.087863 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=default-k8s-diff-port-775571 minikube.k8s.io/updated_at=2024_01_16T03_18_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:40.168057 1011955 ops.go:34] apiserver oom_adj: -16
	I0116 03:18:40.492375 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:39.331115 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:41.332298 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:40.476064 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:40.977085 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:41.476706 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:41.976429 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:42.476172 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:42.976176 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:43.476449 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:43.977056 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:44.476761 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:44.976151 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:40.992990 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:41.492564 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:41.992578 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:42.493062 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:42.993372 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:43.493473 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:43.993319 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:44.493019 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:44.993411 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:45.492880 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:43.832198 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:44.824162 1011460 pod_ready.go:81] duration metric: took 4m0.000326915s waiting for pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace to be "Ready" ...
	E0116 03:18:44.824195 1011460 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-lgmnp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:18:44.824281 1011460 pod_ready.go:38] duration metric: took 4m12.556069814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:18:44.824351 1011460 kubeadm.go:640] restartCluster took 4m33.151422709s
	W0116 03:18:44.824438 1011460 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:18:44.824479 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:18:45.476629 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:45.977106 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:46.476146 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:46.977113 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:47.476693 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:47.976945 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:48.477170 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:48.976394 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:49.476848 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:49.976797 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:45.993346 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:46.493256 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:46.993006 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:47.492403 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:47.992813 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:48.493940 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:48.992944 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:49.493490 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:49.993389 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:50.492678 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:50.992627 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:51.493472 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:51.993052 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:52.492430 1011955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:52.646080 1011955 kubeadm.go:1088] duration metric: took 12.558292993s to wait for elevateKubeSystemPrivileges.
	I0116 03:18:52.646138 1011955 kubeadm.go:406] StartCluster complete in 5m13.439862133s
	I0116 03:18:52.646169 1011955 settings.go:142] acquiring lock: {Name:mk39c69fb5454efd5afab021c02c3bec1f1b4e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:18:52.646281 1011955 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:18:52.648500 1011955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:18:52.648860 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:18:52.648869 1011955 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:18:52.648980 1011955 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-775571"
	I0116 03:18:52.649003 1011955 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-775571"
	I0116 03:18:52.649005 1011955 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-775571"
	I0116 03:18:52.649029 1011955 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-775571"
	I0116 03:18:52.649034 1011955 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-775571"
	W0116 03:18:52.649043 1011955 addons.go:243] addon metrics-server should already be in state true
	I0116 03:18:52.649114 1011955 config.go:182] Loaded profile config "default-k8s-diff-port-775571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:18:52.649008 1011955 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-775571"
	I0116 03:18:52.649130 1011955 host.go:66] Checking if "default-k8s-diff-port-775571" exists ...
	W0116 03:18:52.649149 1011955 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:18:52.649212 1011955 host.go:66] Checking if "default-k8s-diff-port-775571" exists ...
	I0116 03:18:52.649529 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.649563 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.649529 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.649613 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.649660 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.649697 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.666073 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0116 03:18:52.666727 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.666879 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41643
	I0116 03:18:52.667406 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.667435 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.667447 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.667814 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.667985 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.668015 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.668030 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetState
	I0116 03:18:52.668373 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.668745 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33307
	I0116 03:18:52.668995 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.669057 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.669205 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.669742 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.669767 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.670181 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.670725 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.670760 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.672109 1011955 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-775571"
	W0116 03:18:52.672134 1011955 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:18:52.672165 1011955 host.go:66] Checking if "default-k8s-diff-port-775571" exists ...
	I0116 03:18:52.672575 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.672630 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.687775 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41925
	I0116 03:18:52.689625 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0116 03:18:52.689778 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.690073 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.690203 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41865
	I0116 03:18:52.690460 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.690473 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.690742 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.690859 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.691055 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.691067 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.691409 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.691627 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetState
	I0116 03:18:52.692030 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetState
	I0116 03:18:52.693938 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:18:52.696389 1011955 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:18:52.694587 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:18:52.694891 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.698046 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.698164 1011955 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:18:52.698189 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:18:52.698218 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:18:52.700172 1011955 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:18:52.701996 1011955 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:18:52.702018 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:18:52.702043 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:18:52.702058 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.699885 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.702560 1011955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:52.702602 1011955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:52.702805 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:18:52.702820 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.702870 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:18:52.703094 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:18:52.703363 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:18:52.703544 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:18:52.705663 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.706131 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:18:52.706164 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.706417 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:18:52.706587 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:18:52.706758 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:18:52.706916 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:18:52.725464 1011955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39049
	I0116 03:18:52.726113 1011955 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:52.726781 1011955 main.go:141] libmachine: Using API Version  1
	I0116 03:18:52.726824 1011955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:52.727253 1011955 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:52.727482 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetState
	I0116 03:18:52.729485 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .DriverName
	I0116 03:18:52.729789 1011955 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:18:52.729823 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:18:52.729848 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHHostname
	I0116 03:18:52.732669 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.733121 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:bc:45", ip: ""} in network mk-default-k8s-diff-port-775571: {Iface:virbr4 ExpiryTime:2024-01-16 04:13:23 +0000 UTC Type:0 Mac:52:54:00:4b:bc:45 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-775571 Clientid:01:52:54:00:4b:bc:45}
	I0116 03:18:52.733142 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | domain default-k8s-diff-port-775571 has defined IP address 192.168.72.158 and MAC address 52:54:00:4b:bc:45 in network mk-default-k8s-diff-port-775571
	I0116 03:18:52.733351 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHPort
	I0116 03:18:52.733557 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHKeyPath
	I0116 03:18:52.733766 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .GetSSHUsername
	I0116 03:18:52.733963 1011955 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/default-k8s-diff-port-775571/id_rsa Username:docker}
	I0116 03:18:52.873193 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:18:52.909098 1011955 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:18:52.909141 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:18:52.941709 1011955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:18:52.942443 1011955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:18:52.966702 1011955 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:18:52.966736 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:18:53.020737 1011955 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:18:53.020823 1011955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:18:53.066186 1011955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:18:53.170342 1011955 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-775571" context rescaled to 1 replicas
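	(For reference; not part of the captured log.) The kapi.go line above scales the coredns deployment in kube-system down to a single replica. A rough manual equivalent, assuming the same kubeconfig as the surrounding commands:

sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  -n kube-system scale deployment coredns --replicas=1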
	I0116 03:18:53.170433 1011955 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:18:53.172678 1011955 out.go:177] * Verifying Kubernetes components...
	I0116 03:18:50.476090 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:50.976173 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:51.476673 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:51.976165 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:52.476238 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:52.976850 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:53.476943 1011681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:18:53.686011 1011681 kubeadm.go:1088] duration metric: took 15.018895956s to wait for elevateKubeSystemPrivileges.
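	(For reference; not part of the captured log.) The repeated "kubectl get sa default" runs above poll, on roughly a 500ms interval, for the default service account that the controller manager creates once the control plane is serving; the duration metric records that this wait took about 15s here. An equivalent shell loop, illustrative only:

# Poll until the default service account exists (binary path and kubeconfig as in the log).
until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default \
    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done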
	I0116 03:18:53.686052 1011681 kubeadm.go:406] StartCluster complete in 5m35.06362605s
	I0116 03:18:53.686080 1011681 settings.go:142] acquiring lock: {Name:mk39c69fb5454efd5afab021c02c3bec1f1b4e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:18:53.686180 1011681 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:18:53.688860 1011681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:18:53.689175 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:18:53.689247 1011681 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:18:53.689333 1011681 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-788237"
	I0116 03:18:53.689349 1011681 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-788237"
	I0116 03:18:53.689364 1011681 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-788237"
	I0116 03:18:53.689377 1011681 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-788237"
	W0116 03:18:53.689389 1011681 addons.go:243] addon metrics-server should already be in state true
	I0116 03:18:53.689436 1011681 config.go:182] Loaded profile config "old-k8s-version-788237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:18:53.689455 1011681 host.go:66] Checking if "old-k8s-version-788237" exists ...
	I0116 03:18:53.689378 1011681 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-788237"
	I0116 03:18:53.689357 1011681 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-788237"
	W0116 03:18:53.689599 1011681 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:18:53.689645 1011681 host.go:66] Checking if "old-k8s-version-788237" exists ...
	I0116 03:18:53.689901 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.689924 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.689924 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.689950 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.690144 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.690180 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.711157 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37709
	I0116 03:18:53.713950 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.714211 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38563
	I0116 03:18:53.714552 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.714576 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.714663 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.715012 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.715181 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.715199 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.715683 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.715710 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.716263 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.716605 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetState
	I0116 03:18:53.720570 1011681 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-788237"
	W0116 03:18:53.720598 1011681 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:18:53.720630 1011681 host.go:66] Checking if "old-k8s-version-788237" exists ...
	I0116 03:18:53.721140 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.721183 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.724181 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0116 03:18:53.724763 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.725334 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.725364 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.725737 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.726313 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.726362 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.737615 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46761
	I0116 03:18:53.738167 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.738714 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.738739 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.739154 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.739431 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetState
	I0116 03:18:53.741559 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:18:53.741765 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41413
	I0116 03:18:53.744019 1011681 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:18:53.745656 1011681 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:18:53.745691 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:18:53.745718 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:18:53.745868 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.746513 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.746535 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.746969 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.747587 1011681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:18:53.747621 1011681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:18:53.749923 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.749959 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:18:53.749982 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.750294 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:18:53.750501 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:18:53.750814 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:18:53.751535 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:18:53.755634 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
	I0116 03:18:53.756246 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.756894 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.756918 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.761942 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.765938 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetState
	I0116 03:18:53.769965 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:18:53.770273 1011681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40811
	I0116 03:18:53.770837 1011681 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:18:53.772568 1011681 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:18:53.771317 1011681 main.go:141] libmachine: Using API Version  1
	I0116 03:18:53.774128 1011681 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:18:53.772620 1011681 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:18:53.774150 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:18:53.774254 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:18:53.774578 1011681 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:18:53.775367 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetState
	I0116 03:18:53.778662 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.778671 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .DriverName
	I0116 03:18:53.778694 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:18:53.778716 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.781111 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:18:53.781144 1011681 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:18:53.781161 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:18:53.781185 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHHostname
	I0116 03:18:53.781359 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:18:53.781509 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:18:53.781647 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:18:53.784375 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.784817 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b7:2e", ip: ""} in network mk-old-k8s-version-788237: {Iface:virbr3 ExpiryTime:2024-01-16 04:13:01 +0000 UTC Type:0 Mac:52:54:00:64:b7:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:old-k8s-version-788237 Clientid:01:52:54:00:64:b7:2e}
	I0116 03:18:53.784841 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | domain old-k8s-version-788237 has defined IP address 192.168.39.91 and MAC address 52:54:00:64:b7:2e in network mk-old-k8s-version-788237
	I0116 03:18:53.785021 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHPort
	I0116 03:18:53.785248 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHKeyPath
	I0116 03:18:53.785367 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .GetSSHUsername
	I0116 03:18:53.785586 1011681 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/old-k8s-version-788237/id_rsa Username:docker}
	I0116 03:18:53.920099 1011681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:18:53.964232 1011681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:18:53.983575 1011681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:18:54.005702 1011681 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:18:54.005736 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:18:54.084574 1011681 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:18:54.084606 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:18:54.143597 1011681 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:18:54.143640 1011681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:18:54.195269 1011681 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-788237" context rescaled to 1 replicas
	I0116 03:18:54.195324 1011681 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:18:54.197378 1011681 out.go:177] * Verifying Kubernetes components...
	I0116 03:18:54.198806 1011681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:18:54.323439 1011681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:18:55.133484 1011681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.169208691s)
	I0116 03:18:55.133595 1011681 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-788237" to be "Ready" ...
	I0116 03:18:55.133486 1011681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.213323807s)
	I0116 03:18:55.133650 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.133664 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.133531 1011681 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.149922539s)
	I0116 03:18:55.133873 1011681 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0116 03:18:55.133967 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Closing plugin on server side
	I0116 03:18:55.133609 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.133993 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.134363 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Closing plugin on server side
	I0116 03:18:55.134402 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.134415 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.134426 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.134439 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.134750 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Closing plugin on server side
	I0116 03:18:55.134766 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.134781 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.135982 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.136002 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.136014 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.136046 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.136623 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.136656 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:53.174208 1011955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:18:54.899603 1011955 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.026351829s)
	I0116 03:18:54.899706 1011955 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0116 03:18:55.340175 1011955 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.397688954s)
	I0116 03:18:55.340238 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.340252 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.340413 1011955 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.398670161s)
	I0116 03:18:55.340439 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.340449 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.344833 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Closing plugin on server side
	I0116 03:18:55.344839 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Closing plugin on server side
	I0116 03:18:55.344858 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.344858 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.344871 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.344877 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.344886 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.344889 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.344897 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.344899 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.345154 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.345172 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.345207 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Closing plugin on server side
	I0116 03:18:55.345229 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Closing plugin on server side
	I0116 03:18:55.345311 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.345328 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.411967 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.412006 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.412382 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.412402 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.229555 1011681 node_ready.go:49] node "old-k8s-version-788237" has status "Ready":"True"
	I0116 03:18:55.229641 1011681 node_ready.go:38] duration metric: took 95.965741ms waiting for node "old-k8s-version-788237" to be "Ready" ...
	I0116 03:18:55.229667 1011681 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:18:55.290235 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.290288 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.290652 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.290675 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.311952 1011681 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qmzl6" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:55.886230 1011681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.562731329s)
	I0116 03:18:55.886302 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.886324 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.886813 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.886840 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.886852 1011681 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.886863 1011681 main.go:141] libmachine: (old-k8s-version-788237) Calling .Close
	I0116 03:18:55.889105 1011681 main.go:141] libmachine: (old-k8s-version-788237) DBG | Closing plugin on server side
	I0116 03:18:55.889151 1011681 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.889160 1011681 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.889171 1011681 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-788237"
	I0116 03:18:55.891206 1011681 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
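	(For reference; not part of the captured log.) With the three addons reported as enabled, the objects they create can be checked directly. A hedged sketch: the deployment and pod names match the kube-system pod listings later in this log, while "standard" is minikube's usual default StorageClass name and is assumed rather than shown here:

kubectl --context old-k8s-version-788237 -n kube-system get deploy metrics-server
kubectl --context old-k8s-version-788237 -n kube-system get pod storage-provisioner
kubectl --context old-k8s-version-788237 get storageclass   # expect one named 'standard' (assumed)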
	I0116 03:18:55.952771 1011955 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.778522731s)
	I0116 03:18:55.952832 1011955 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-775571" to be "Ready" ...
	I0116 03:18:55.953294 1011955 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.887054667s)
	I0116 03:18:55.953343 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.953359 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.956009 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) DBG | Closing plugin on server side
	I0116 03:18:55.956050 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.956072 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.956095 1011955 main.go:141] libmachine: Making call to close driver server
	I0116 03:18:55.956106 1011955 main.go:141] libmachine: (default-k8s-diff-port-775571) Calling .Close
	I0116 03:18:55.956401 1011955 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:18:55.956417 1011955 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:18:55.956428 1011955 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-775571"
	I0116 03:18:55.959261 1011955 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:18:55.893233 1011681 addons.go:505] enable addons completed in 2.203983589s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:18:57.320945 1011681 pod_ready.go:102] pod "coredns-5644d7b6d9-qmzl6" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:59.825898 1011681 pod_ready.go:102] pod "coredns-5644d7b6d9-qmzl6" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:55.960681 1011955 addons.go:505] enable addons completed in 3.311813314s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:18:55.983312 1011955 node_ready.go:49] node "default-k8s-diff-port-775571" has status "Ready":"True"
	I0116 03:18:55.983350 1011955 node_ready.go:38] duration metric: took 30.503183ms waiting for node "default-k8s-diff-port-775571" to be "Ready" ...
	I0116 03:18:55.983366 1011955 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:18:56.004432 1011955 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mk795" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.513965 1011955 pod_ready.go:92] pod "coredns-5dd5756b68-mk795" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:56.514083 1011955 pod_ready.go:81] duration metric: took 509.611409ms waiting for pod "coredns-5dd5756b68-mk795" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.514148 1011955 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.524671 1011955 pod_ready.go:92] pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:56.524770 1011955 pod_ready.go:81] duration metric: took 10.59132ms waiting for pod "etcd-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.524803 1011955 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.538471 1011955 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:56.538581 1011955 pod_ready.go:81] duration metric: took 13.724762ms waiting for pod "kube-apiserver-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.538616 1011955 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.549389 1011955 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:56.549494 1011955 pod_ready.go:81] duration metric: took 10.835015ms waiting for pod "kube-controller-manager-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.549524 1011955 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zw495" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.757971 1011955 pod_ready.go:92] pod "kube-proxy-zw495" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:56.758009 1011955 pod_ready.go:81] duration metric: took 208.445706ms waiting for pod "kube-proxy-zw495" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:56.758024 1011955 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:57.156938 1011955 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace has status "Ready":"True"
	I0116 03:18:57.156972 1011955 pod_ready.go:81] duration metric: took 398.939705ms waiting for pod "kube-scheduler-default-k8s-diff-port-775571" in "kube-system" namespace to be "Ready" ...
	I0116 03:18:57.156983 1011955 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace to be "Ready" ...
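	(For reference; not part of the captured log.) Each pod_ready.go line above polls a single kube-system pod for the Ready condition before moving on to the next. A rough manual equivalent with kubectl wait, reusing the pod name and the 6m0s budget from the log:

kubectl --context default-k8s-diff-port-775571 -n kube-system wait \
  --for=condition=Ready pod/metrics-server-57f55c9bc5-928d7 --timeout=6m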
	I0116 03:18:59.164487 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:18:59.818244 1011460 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.993735667s)
	I0116 03:18:59.818326 1011460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:18:59.833153 1011460 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:18:59.842806 1011460 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:18:59.851950 1011460 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:18:59.852010 1011460 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 03:19:00.070447 1011460 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:19:00.320286 1011681 pod_ready.go:92] pod "coredns-5644d7b6d9-qmzl6" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:00.320320 1011681 pod_ready.go:81] duration metric: took 5.0083337s waiting for pod "coredns-5644d7b6d9-qmzl6" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:00.320333 1011681 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tv7gz" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:00.326637 1011681 pod_ready.go:92] pod "kube-proxy-tv7gz" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:00.326664 1011681 pod_ready.go:81] duration metric: took 6.322991ms waiting for pod "kube-proxy-tv7gz" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:00.326677 1011681 pod_ready.go:38] duration metric: took 5.096991549s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:19:00.326699 1011681 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:19:00.326772 1011681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:19:00.343804 1011681 api_server.go:72] duration metric: took 6.148440288s to wait for apiserver process to appear ...
	I0116 03:19:00.343832 1011681 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:19:00.343855 1011681 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0116 03:19:00.351105 1011681 api_server.go:279] https://192.168.39.91:8443/healthz returned 200:
	ok
	I0116 03:19:00.352195 1011681 api_server.go:141] control plane version: v1.16.0
	I0116 03:19:00.352263 1011681 api_server.go:131] duration metric: took 8.420277ms to wait for apiserver health ...
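	(For reference; not part of the captured log.) The healthz check above is a plain HTTPS GET against the apiserver endpoint, and the log shows it returning 200 with body "ok". An equivalent probe by hand; -k skips verification of the cluster's self-signed serving certificate:

curl -k https://192.168.39.91:8443/healthz
# expected body: ok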
	I0116 03:19:00.352283 1011681 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:19:00.361924 1011681 system_pods.go:59] 4 kube-system pods found
	I0116 03:19:00.361952 1011681 system_pods.go:61] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:00.361957 1011681 system_pods.go:61] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:00.361963 1011681 system_pods.go:61] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:00.361968 1011681 system_pods.go:61] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:00.361977 1011681 system_pods.go:74] duration metric: took 9.67913ms to wait for pod list to return data ...
	I0116 03:19:00.361987 1011681 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:19:00.364600 1011681 default_sa.go:45] found service account: "default"
	I0116 03:19:00.364630 1011681 default_sa.go:55] duration metric: took 2.635157ms for default service account to be created ...
	I0116 03:19:00.364642 1011681 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:19:00.368386 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:00.368409 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:00.368416 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:00.368423 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:00.368430 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:00.368454 1011681 retry.go:31] will retry after 285.445367ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:00.660996 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:00.661033 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:00.661040 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:00.661047 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:00.661055 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:00.661079 1011681 retry.go:31] will retry after 334.380732ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:01.000372 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:01.000401 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:01.000407 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:01.000413 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:01.000418 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:01.000437 1011681 retry.go:31] will retry after 432.029845ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:01.437761 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:01.437794 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:01.437817 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:01.437827 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:01.437835 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:01.437857 1011681 retry.go:31] will retry after 542.969865ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:01.985932 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:01.985965 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:01.985970 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:01.985977 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:01.985984 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:01.986006 1011681 retry.go:31] will retry after 682.538217ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:02.673234 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:02.673268 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:02.673274 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:02.673280 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:02.673286 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:02.673305 1011681 retry.go:31] will retry after 865.818681ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:03.544313 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:03.544355 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:03.544363 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:03.544373 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:03.544383 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:03.544407 1011681 retry.go:31] will retry after 754.732197ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:04.304165 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:04.304205 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:04.304217 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:04.304227 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:04.304235 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:04.304258 1011681 retry.go:31] will retry after 1.101452697s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:01.164856 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:03.165951 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:05.166097 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:05.411683 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:05.411726 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:05.411736 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:05.411750 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:05.411758 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:05.411781 1011681 retry.go:31] will retry after 1.524854445s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:06.941891 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:06.941929 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:06.941939 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:06.941949 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:06.941957 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:06.941984 1011681 retry.go:31] will retry after 1.460454781s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:08.408630 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:08.408662 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:08.408668 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:08.408687 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:08.408692 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:08.408713 1011681 retry.go:31] will retry after 1.769662932s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:10.184053 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:10.184081 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:10.184086 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:10.184093 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:10.184098 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:10.184117 1011681 retry.go:31] will retry after 3.059139s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
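	(For reference; not part of the captured log.) The retry.go lines report that the etcd, kube-apiserver, kube-controller-manager and kube-scheduler mirror pods have not yet shown up in kube-system for this v1.16.0 cluster, so the check sleeps and retries with growing delays. A minimal sketch of that style of polling loop, not minikube's actual retry code; the component label is the one listed in the pod_ready.go selector earlier in the log:

delay=1
until kubectl --context old-k8s-version-788237 -n kube-system get pods \
    -l component=kube-apiserver 2>/dev/null | grep -q Running; do
  sleep "$delay"
  delay=$((delay * 2))   # crude exponential backoff, illustrative only
done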
	I0116 03:19:07.169102 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:09.666541 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:11.938237 1011460 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0116 03:19:11.938354 1011460 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:19:11.938572 1011460 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:19:11.939095 1011460 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:19:11.939269 1011460 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:19:11.939370 1011460 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:19:11.941237 1011460 out.go:204]   - Generating certificates and keys ...
	I0116 03:19:11.941348 1011460 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:19:11.941482 1011460 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:19:11.941579 1011460 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:19:11.941646 1011460 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:19:11.941733 1011460 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:19:11.941821 1011460 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:19:11.941908 1011460 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:19:11.941959 1011460 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:19:11.942018 1011460 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:19:11.942114 1011460 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:19:11.942208 1011460 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:19:11.942278 1011460 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:19:11.942348 1011460 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:19:11.942424 1011460 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0116 03:19:11.942487 1011460 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:19:11.942579 1011460 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:19:11.942659 1011460 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:19:11.942779 1011460 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:19:11.942856 1011460 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:19:11.944468 1011460 out.go:204]   - Booting up control plane ...
	I0116 03:19:11.944556 1011460 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:19:11.944624 1011460 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:19:11.944694 1011460 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:19:11.944847 1011460 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:19:11.944975 1011460 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:19:11.945039 1011460 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 03:19:11.945209 1011460 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:19:11.945282 1011460 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502907 seconds
	I0116 03:19:11.945373 1011460 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:19:11.945476 1011460 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:19:11.945541 1011460 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:19:11.945750 1011460 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-934668 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 03:19:11.945823 1011460 kubeadm.go:322] [bootstrap-token] Using token: pj08z0.5ut3mf4afujawh3s
	I0116 03:19:11.947396 1011460 out.go:204]   - Configuring RBAC rules ...
	I0116 03:19:11.947532 1011460 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:19:11.947645 1011460 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:19:11.947822 1011460 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:19:11.948000 1011460 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:19:11.948094 1011460 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:19:11.948182 1011460 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:19:11.948281 1011460 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:19:11.948327 1011460 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:19:11.948373 1011460 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:19:11.948383 1011460 kubeadm.go:322] 
	I0116 03:19:11.948440 1011460 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:19:11.948449 1011460 kubeadm.go:322] 
	I0116 03:19:11.948546 1011460 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:19:11.948567 1011460 kubeadm.go:322] 
	I0116 03:19:11.948614 1011460 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:19:11.948725 1011460 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:19:11.948805 1011460 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:19:11.948815 1011460 kubeadm.go:322] 
	I0116 03:19:11.948891 1011460 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 03:19:11.948901 1011460 kubeadm.go:322] 
	I0116 03:19:11.948979 1011460 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 03:19:11.949011 1011460 kubeadm.go:322] 
	I0116 03:19:11.949086 1011460 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:19:11.949215 1011460 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:19:11.949311 1011460 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:19:11.949332 1011460 kubeadm.go:322] 
	I0116 03:19:11.949463 1011460 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:19:11.949576 1011460 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:19:11.949590 1011460 kubeadm.go:322] 
	I0116 03:19:11.949688 1011460 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token pj08z0.5ut3mf4afujawh3s \
	I0116 03:19:11.949837 1011460 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b \
	I0116 03:19:11.949877 1011460 kubeadm.go:322] 	--control-plane 
	I0116 03:19:11.949890 1011460 kubeadm.go:322] 
	I0116 03:19:11.949997 1011460 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:19:11.950009 1011460 kubeadm.go:322] 
	I0116 03:19:11.950108 1011460 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token pj08z0.5ut3mf4afujawh3s \
	I0116 03:19:11.950232 1011460 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b 
	I0116 03:19:11.950269 1011460 cni.go:84] Creating CNI manager for ""
	I0116 03:19:11.950284 1011460 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:19:11.952013 1011460 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:19:11.953373 1011460 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:19:12.016915 1011460 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:19:12.042169 1011460 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:19:12.042259 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=no-preload-934668 minikube.k8s.io/updated_at=2024_01_16T03_19_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:12.042266 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:12.092434 1011460 ops.go:34] apiserver oom_adj: -16
	I0116 03:19:13.250984 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:13.251026 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:13.251035 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:13.251046 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:13.251054 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:13.251078 1011681 retry.go:31] will retry after 3.301960932s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:12.168237 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:14.669074 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:12.372548 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:12.873171 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:13.372932 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:13.873086 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:14.373328 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:14.873249 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:15.372564 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:15.873604 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:16.372846 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:16.873652 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:16.558984 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:16.559016 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:16.559023 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:16.559031 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:16.559036 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:16.559056 1011681 retry.go:31] will retry after 4.433753761s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:17.166555 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:19.666500 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:17.373434 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:17.873591 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:18.373340 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:18.873267 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:19.373311 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:19.873538 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:20.372770 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:20.873645 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:21.373033 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:21.872773 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:22.372607 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:22.872582 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:23.372659 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:23.873410 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:24.372682 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:24.873365 1011460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:19:24.989170 1011460 kubeadm.go:1088] duration metric: took 12.946988185s to wait for elevateKubeSystemPrivileges.
	I0116 03:19:24.989221 1011460 kubeadm.go:406] StartCluster complete in 5m13.370173315s
	I0116 03:19:24.989247 1011460 settings.go:142] acquiring lock: {Name:mk39c69fb5454efd5afab021c02c3bec1f1b4e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:19:24.989351 1011460 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:19:24.991793 1011460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:19:24.992117 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:19:24.992155 1011460 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:19:24.992266 1011460 addons.go:69] Setting storage-provisioner=true in profile "no-preload-934668"
	I0116 03:19:24.992274 1011460 addons.go:69] Setting default-storageclass=true in profile "no-preload-934668"
	I0116 03:19:24.992291 1011460 addons.go:234] Setting addon storage-provisioner=true in "no-preload-934668"
	I0116 03:19:24.992295 1011460 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-934668"
	I0116 03:19:24.992296 1011460 addons.go:69] Setting metrics-server=true in profile "no-preload-934668"
	I0116 03:19:24.992325 1011460 addons.go:234] Setting addon metrics-server=true in "no-preload-934668"
	W0116 03:19:24.992338 1011460 addons.go:243] addon metrics-server should already be in state true
	I0116 03:19:24.992393 1011460 config.go:182] Loaded profile config "no-preload-934668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	W0116 03:19:24.992300 1011460 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:19:24.992415 1011460 host.go:66] Checking if "no-preload-934668" exists ...
	I0116 03:19:24.992456 1011460 host.go:66] Checking if "no-preload-934668" exists ...
	I0116 03:19:24.992754 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:24.992775 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:24.992810 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:24.992831 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:24.992871 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:24.992959 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:25.010903 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33013
	I0116 03:19:25.011636 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.012150 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39939
	I0116 03:19:25.012167 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39475
	I0116 03:19:25.012223 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.012247 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.012568 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.012669 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.012784 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.013013 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.013037 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.013189 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.013202 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.013647 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:25.013677 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:25.014037 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.014040 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.014620 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetState
	I0116 03:19:25.014622 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:25.014713 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:25.018506 1011460 addons.go:234] Setting addon default-storageclass=true in "no-preload-934668"
	W0116 03:19:25.018563 1011460 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:19:25.018603 1011460 host.go:66] Checking if "no-preload-934668" exists ...
	I0116 03:19:25.019024 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:25.019089 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:25.034161 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
	I0116 03:19:25.034400 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40521
	I0116 03:19:25.034909 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.035027 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.035536 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.035555 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.035687 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.035698 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.036064 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.036123 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.036296 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetState
	I0116 03:19:25.036323 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetState
	I0116 03:19:25.037452 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0116 03:19:25.038065 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.038653 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:19:25.038797 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.038807 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.040516 1011460 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:19:25.039169 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.039494 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:19:25.041993 1011460 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:19:25.042021 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:19:25.042042 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:19:25.043350 1011460 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:19:20.998514 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:20.998541 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:20.998546 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:20.998553 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:20.998558 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:20.998576 1011681 retry.go:31] will retry after 6.19070677s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:22.164973 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:24.165241 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:25.044790 1011460 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:19:25.044804 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:19:25.044820 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:19:25.042734 1011460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:19:25.044907 1011460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:19:25.045505 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.046226 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:19:25.046284 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:19:25.046404 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.046434 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:19:25.046724 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:19:25.046878 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:19:25.048780 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.049237 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:19:25.049260 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.049432 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:19:25.049846 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:19:25.050200 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:19:25.050376 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:19:25.062306 1011460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40141
	I0116 03:19:25.062765 1011460 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:19:25.063248 1011460 main.go:141] libmachine: Using API Version  1
	I0116 03:19:25.063261 1011460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:19:25.063609 1011460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:19:25.063805 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetState
	I0116 03:19:25.065537 1011460 main.go:141] libmachine: (no-preload-934668) Calling .DriverName
	I0116 03:19:25.065785 1011460 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:19:25.065818 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:19:25.065841 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHHostname
	I0116 03:19:25.068664 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.069102 1011460 main.go:141] libmachine: (no-preload-934668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:89:86", ip: ""} in network mk-no-preload-934668: {Iface:virbr2 ExpiryTime:2024-01-16 04:13:45 +0000 UTC Type:0 Mac:52:54:00:96:89:86 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:no-preload-934668 Clientid:01:52:54:00:96:89:86}
	I0116 03:19:25.069125 1011460 main.go:141] libmachine: (no-preload-934668) DBG | domain no-preload-934668 has defined IP address 192.168.50.29 and MAC address 52:54:00:96:89:86 in network mk-no-preload-934668
	I0116 03:19:25.069273 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHPort
	I0116 03:19:25.069454 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHKeyPath
	I0116 03:19:25.069627 1011460 main.go:141] libmachine: (no-preload-934668) Calling .GetSSHUsername
	I0116 03:19:25.069763 1011460 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/no-preload-934668/id_rsa Username:docker}
	I0116 03:19:25.182658 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:19:25.209575 1011460 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:19:25.231221 1011460 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:19:25.231310 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:19:25.287263 1011460 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:19:25.337307 1011460 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:19:25.337350 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:19:25.433778 1011460 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:19:25.433821 1011460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:19:25.507802 1011460 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:19:25.528239 1011460 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-934668" context rescaled to 1 replicas
	I0116 03:19:25.528282 1011460 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.29 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:19:25.530067 1011460 out.go:177] * Verifying Kubernetes components...
	I0116 03:19:25.532055 1011460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:19:26.021224 1011460 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0116 03:19:26.359779 1011460 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.072464523s)
	I0116 03:19:26.359844 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.359859 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.359860 1011460 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.150243124s)
	I0116 03:19:26.359900 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.359919 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.360228 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.360258 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.360269 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.360278 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.360447 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Closing plugin on server side
	I0116 03:19:26.360507 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.360546 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.360560 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Closing plugin on server side
	I0116 03:19:26.361873 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.361895 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.361911 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.361920 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.362297 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Closing plugin on server side
	I0116 03:19:26.362339 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.362372 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.376371 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.376405 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.376703 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.376722 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.607902 1011460 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.100046486s)
	I0116 03:19:26.607968 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.607973 1011460 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.075879995s)
	I0116 03:19:26.608021 1011460 node_ready.go:35] waiting up to 6m0s for node "no-preload-934668" to be "Ready" ...
	I0116 03:19:26.607985 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.608450 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.608470 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.608483 1011460 main.go:141] libmachine: Making call to close driver server
	I0116 03:19:26.608493 1011460 main.go:141] libmachine: (no-preload-934668) Calling .Close
	I0116 03:19:26.608771 1011460 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:19:26.608791 1011460 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:19:26.608794 1011460 main.go:141] libmachine: (no-preload-934668) DBG | Closing plugin on server side
	I0116 03:19:26.608803 1011460 addons.go:470] Verifying addon metrics-server=true in "no-preload-934668"
	I0116 03:19:26.611385 1011460 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:19:26.612672 1011460 addons.go:505] enable addons completed in 1.620530835s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:19:26.611903 1011460 node_ready.go:49] node "no-preload-934668" has status "Ready":"True"
	I0116 03:19:26.612707 1011460 node_ready.go:38] duration metric: took 4.665246ms waiting for node "no-preload-934668" to be "Ready" ...
	I0116 03:19:26.612719 1011460 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:19:26.625443 1011460 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-64qzh" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:27.195320 1011681 system_pods.go:86] 4 kube-system pods found
	I0116 03:19:27.195364 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:27.195375 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:27.195388 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:27.195396 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:27.195423 1011681 retry.go:31] will retry after 6.009246504s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:26.166175 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:28.167332 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:27.632495 1011460 pod_ready.go:97] error getting pod "coredns-76f75df574-64qzh" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-64qzh" not found
	I0116 03:19:27.632522 1011460 pod_ready.go:81] duration metric: took 1.007051516s waiting for pod "coredns-76f75df574-64qzh" in "kube-system" namespace to be "Ready" ...
	E0116 03:19:27.632534 1011460 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-64qzh" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-64qzh" not found
	I0116 03:19:27.632541 1011460 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-k2kc7" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.640682 1011460 pod_ready.go:92] pod "coredns-76f75df574-k2kc7" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:29.640718 1011460 pod_ready.go:81] duration metric: took 2.008169192s waiting for pod "coredns-76f75df574-k2kc7" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.640736 1011460 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.646552 1011460 pod_ready.go:92] pod "etcd-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:29.646579 1011460 pod_ready.go:81] duration metric: took 5.835401ms waiting for pod "etcd-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.646589 1011460 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.651970 1011460 pod_ready.go:92] pod "kube-apiserver-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:29.652004 1011460 pod_ready.go:81] duration metric: took 5.40828ms waiting for pod "kube-apiserver-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.652018 1011460 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.658077 1011460 pod_ready.go:92] pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:29.658104 1011460 pod_ready.go:81] duration metric: took 6.078615ms waiting for pod "kube-controller-manager-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.658113 1011460 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fr424" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.663585 1011460 pod_ready.go:92] pod "kube-proxy-fr424" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:29.663608 1011460 pod_ready.go:81] duration metric: took 5.488053ms waiting for pod "kube-proxy-fr424" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:29.663617 1011460 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:30.037029 1011460 pod_ready.go:92] pod "kube-scheduler-no-preload-934668" in "kube-system" namespace has status "Ready":"True"
	I0116 03:19:30.037054 1011460 pod_ready.go:81] duration metric: took 373.431547ms waiting for pod "kube-scheduler-no-preload-934668" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:30.037066 1011460 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace to be "Ready" ...
	I0116 03:19:32.045895 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:33.211194 1011681 system_pods.go:86] 5 kube-system pods found
	I0116 03:19:33.211224 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:33.211230 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:33.211234 1011681 system_pods.go:89] "kube-scheduler-old-k8s-version-788237" [738a1d18-b8a8-429e-b335-a542be90f4db] Pending
	I0116 03:19:33.211240 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:33.211245 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:33.211264 1011681 retry.go:31] will retry after 6.865213703s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:19:30.664955 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:33.164999 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:35.168217 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:34.545787 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:37.045220 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:40.083281 1011681 system_pods.go:86] 5 kube-system pods found
	I0116 03:19:40.083312 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:40.083317 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:40.083322 1011681 system_pods.go:89] "kube-scheduler-old-k8s-version-788237" [738a1d18-b8a8-429e-b335-a542be90f4db] Running
	I0116 03:19:40.083329 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:40.083333 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:40.083354 1011681 retry.go:31] will retry after 12.14535235s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0116 03:19:37.664530 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:39.666312 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:39.544826 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:41.545124 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:42.167148 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:44.666332 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:44.046884 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:46.546221 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:47.165232 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:49.165989 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:49.045230 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:51.045508 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:52.235832 1011681 system_pods.go:86] 8 kube-system pods found
	I0116 03:19:52.235865 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:19:52.235870 1011681 system_pods.go:89] "etcd-old-k8s-version-788237" [d4e1632d-c3ce-47c0-a692-0d108bd3c46c] Running
	I0116 03:19:52.235874 1011681 system_pods.go:89] "kube-apiserver-old-k8s-version-788237" [6d662cac-b4ba-4b5a-a942-38056d2aab63] Running
	I0116 03:19:52.235878 1011681 system_pods.go:89] "kube-controller-manager-old-k8s-version-788237" [2ccd00ed-668e-40b6-b364-63e7a85d4fe9] Pending
	I0116 03:19:52.235882 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:19:52.235887 1011681 system_pods.go:89] "kube-scheduler-old-k8s-version-788237" [738a1d18-b8a8-429e-b335-a542be90f4db] Running
	I0116 03:19:52.235892 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:19:52.235897 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:19:52.235916 1011681 retry.go:31] will retry after 13.113559392s: missing components: kube-controller-manager
	I0116 03:19:51.665249 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:53.667802 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:53.544777 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:55.545265 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:56.166884 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:58.167295 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:19:58.046171 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:00.545977 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:05.356292 1011681 system_pods.go:86] 8 kube-system pods found
	I0116 03:20:05.356332 1011681 system_pods.go:89] "coredns-5644d7b6d9-qmzl6" [3e4b23ca-c18b-4158-b15b-df53326b384c] Running
	I0116 03:20:05.356340 1011681 system_pods.go:89] "etcd-old-k8s-version-788237" [d4e1632d-c3ce-47c0-a692-0d108bd3c46c] Running
	I0116 03:20:05.356347 1011681 system_pods.go:89] "kube-apiserver-old-k8s-version-788237" [6d662cac-b4ba-4b5a-a942-38056d2aab63] Running
	I0116 03:20:05.356355 1011681 system_pods.go:89] "kube-controller-manager-old-k8s-version-788237" [2ccd00ed-668e-40b6-b364-63e7a85d4fe9] Running
	I0116 03:20:05.356361 1011681 system_pods.go:89] "kube-proxy-tv7gz" [a1bf5e59-b2ae-489c-8297-25ad7c456303] Running
	I0116 03:20:05.356367 1011681 system_pods.go:89] "kube-scheduler-old-k8s-version-788237" [738a1d18-b8a8-429e-b335-a542be90f4db] Running
	I0116 03:20:05.356379 1011681 system_pods.go:89] "metrics-server-74d5856cc6-tx8jt" [790d860a-d27f-4535-94d8-64f40cb79071] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:20:05.356392 1011681 system_pods.go:89] "storage-provisioner" [aa5fac91-1606-4716-a04a-18e9d80c926b] Running
	I0116 03:20:05.356405 1011681 system_pods.go:126] duration metric: took 1m4.991757131s to wait for k8s-apps to be running ...
	I0116 03:20:05.356417 1011681 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:20:05.356484 1011681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:20:05.373421 1011681 system_svc.go:56] duration metric: took 16.991793ms WaitForService to wait for kubelet.
	I0116 03:20:05.373453 1011681 kubeadm.go:581] duration metric: took 1m11.178099498s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:20:05.373474 1011681 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:20:05.377261 1011681 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:20:05.377289 1011681 node_conditions.go:123] node cpu capacity is 2
	I0116 03:20:05.377303 1011681 node_conditions.go:105] duration metric: took 3.824619ms to run NodePressure ...
	I0116 03:20:05.377315 1011681 start.go:228] waiting for startup goroutines ...
	I0116 03:20:05.377324 1011681 start.go:233] waiting for cluster config update ...
	I0116 03:20:05.377340 1011681 start.go:242] writing updated cluster config ...
	I0116 03:20:05.377691 1011681 ssh_runner.go:195] Run: rm -f paused
	I0116 03:20:05.433407 1011681 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0116 03:20:05.435544 1011681 out.go:177] 
	W0116 03:20:05.437104 1011681 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0116 03:20:05.438355 1011681 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0116 03:20:05.439604 1011681 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-788237" cluster and "default" namespace by default
	I0116 03:20:00.665894 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:03.166003 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:03.046349 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:05.047570 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:05.669899 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:08.165604 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:07.545964 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:10.045541 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:10.665401 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:12.666068 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:15.165456 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:12.545270 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:15.044498 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:17.044757 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:17.664970 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:20.170600 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:19.045718 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:21.545760 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:22.665734 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:24.666166 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:24.046926 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:26.545103 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:26.666505 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:29.166514 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:28.545929 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:31.048171 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:31.166637 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:33.665953 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:33.548606 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:35.561699 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:35.666414 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:38.165516 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:38.045658 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:40.544791 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:40.667352 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:43.165494 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:45.166150 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:42.545935 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:45.045849 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:47.667601 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:49.667904 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:47.546691 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:50.044945 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:52.046574 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:52.165607 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:54.666005 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:54.544893 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:57.048203 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:56.666062 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:58.666122 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:20:59.546941 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:01.547326 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:00.675116 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:03.165630 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:05.165989 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:04.045454 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:06.545774 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:07.665616 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:10.165283 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:09.045454 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:11.544234 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:12.166050 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:14.665663 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:13.546119 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:16.044940 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:16.666322 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:18.666577 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:18.545883 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:21.045761 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:21.165313 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:23.166487 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:23.543371 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:25.545045 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:25.666044 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:27.666372 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:30.166224 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:28.046020 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:30.545380 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:32.664709 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:34.665743 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:32.548394 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:35.044140 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:37.045266 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:36.666094 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:39.166598 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:39.544754 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:41.545120 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:41.665435 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:44.177500 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:44.046063 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:46.545258 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:46.665179 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:48.665479 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:49.045153 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:51.544430 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:50.665798 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:52.668246 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:55.164905 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:53.545067 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:55.548667 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:57.664986 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:00.166610 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:21:58.044255 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:00.046558 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:02.664972 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:04.665647 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:02.547522 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:05.045464 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:07.049814 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:07.165053 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:09.166438 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:09.545216 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:11.546990 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:11.166827 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:13.664900 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:13.547322 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:16.046930 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:15.667462 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:18.165667 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:20.167440 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:18.544902 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:20.545091 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:22.167972 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:24.665473 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:23.046783 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:25.546772 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:26.665601 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:28.667378 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:27.552093 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:30.045665 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:32.046723 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:31.166653 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:33.169992 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:34.546495 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:36.552400 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:35.667041 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:38.166719 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:39.045530 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:41.046225 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:40.664638 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:42.664974 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:45.167738 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:43.545469 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:46.045132 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:47.665457 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:50.165843 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:48.045266 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:50.544748 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:52.166892 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:54.170375 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:52.545596 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:54.546876 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:57.048120 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:56.664513 1011955 pod_ready.go:102] pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:57.165325 1011955 pod_ready.go:81] duration metric: took 4m0.008324579s waiting for pod "metrics-server-57f55c9bc5-928d7" in "kube-system" namespace to be "Ready" ...
	E0116 03:22:57.165356 1011955 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:22:57.165370 1011955 pod_ready.go:38] duration metric: took 4m1.181991459s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:22:57.165388 1011955 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:22:57.165528 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:22:57.165670 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:22:57.223487 1011955 cri.go:89] found id: "94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:22:57.223515 1011955 cri.go:89] found id: ""
	I0116 03:22:57.223523 1011955 logs.go:284] 1 containers: [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24]
	I0116 03:22:57.223579 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.228506 1011955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:22:57.228603 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:22:57.275655 1011955 cri.go:89] found id: "c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:22:57.275681 1011955 cri.go:89] found id: ""
	I0116 03:22:57.275689 1011955 logs.go:284] 1 containers: [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8]
	I0116 03:22:57.275747 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.280168 1011955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:22:57.280248 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:22:57.325379 1011955 cri.go:89] found id: "8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:22:57.325403 1011955 cri.go:89] found id: ""
	I0116 03:22:57.325412 1011955 logs.go:284] 1 containers: [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760]
	I0116 03:22:57.325485 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.330376 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:22:57.330456 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:22:57.374600 1011955 cri.go:89] found id: "19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:22:57.374633 1011955 cri.go:89] found id: ""
	I0116 03:22:57.374644 1011955 logs.go:284] 1 containers: [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e]
	I0116 03:22:57.374731 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.379908 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:22:57.379996 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:22:57.422495 1011955 cri.go:89] found id: "cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:22:57.422524 1011955 cri.go:89] found id: ""
	I0116 03:22:57.422535 1011955 logs.go:284] 1 containers: [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f]
	I0116 03:22:57.422599 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.427327 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:22:57.427398 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:22:57.472666 1011955 cri.go:89] found id: "7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:22:57.472698 1011955 cri.go:89] found id: ""
	I0116 03:22:57.472715 1011955 logs.go:284] 1 containers: [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45]
	I0116 03:22:57.472773 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.477425 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:22:57.477487 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:22:57.519963 1011955 cri.go:89] found id: ""
	I0116 03:22:57.519998 1011955 logs.go:284] 0 containers: []
	W0116 03:22:57.520008 1011955 logs.go:286] No container was found matching "kindnet"
	I0116 03:22:57.520018 1011955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:22:57.520082 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:22:57.563323 1011955 cri.go:89] found id: "f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:22:57.563351 1011955 cri.go:89] found id: ""
	I0116 03:22:57.563361 1011955 logs.go:284] 1 containers: [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da]
	I0116 03:22:57.563429 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:22:57.567849 1011955 logs.go:123] Gathering logs for kube-controller-manager [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45] ...
	I0116 03:22:57.567885 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:22:57.630746 1011955 logs.go:123] Gathering logs for storage-provisioner [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da] ...
	I0116 03:22:57.630790 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:22:57.685136 1011955 logs.go:123] Gathering logs for container status ...
	I0116 03:22:57.685175 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:22:57.744223 1011955 logs.go:123] Gathering logs for dmesg ...
	I0116 03:22:57.744253 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:22:57.758357 1011955 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:22:57.758386 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:22:57.921587 1011955 logs.go:123] Gathering logs for kube-apiserver [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24] ...
	I0116 03:22:57.921631 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:22:57.981922 1011955 logs.go:123] Gathering logs for etcd [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8] ...
	I0116 03:22:57.981959 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:22:58.036701 1011955 logs.go:123] Gathering logs for kube-proxy [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f] ...
	I0116 03:22:58.036735 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:22:58.078332 1011955 logs.go:123] Gathering logs for kubelet ...
	I0116 03:22:58.078366 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:22:58.163271 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:22:58.163463 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:22:58.186700 1011955 logs.go:123] Gathering logs for coredns [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760] ...
	I0116 03:22:58.186740 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:22:58.230943 1011955 logs.go:123] Gathering logs for kube-scheduler [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e] ...
	I0116 03:22:58.230987 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:22:58.284787 1011955 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:22:58.284826 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:22:58.711979 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:22:58.712020 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:22:58.712201 1011955 out.go:239] X Problems detected in kubelet:
	W0116 03:22:58.712218 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:22:58.712232 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:22:58.712247 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:22:58.712259 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:22:59.550035 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:02.045996 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:04.049349 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:06.545441 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:08.713432 1011955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:23:08.730913 1011955 api_server.go:72] duration metric: took 4m15.560433909s to wait for apiserver process to appear ...
	I0116 03:23:08.730953 1011955 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:23:08.731009 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:23:08.731083 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:23:08.781386 1011955 cri.go:89] found id: "94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:23:08.781415 1011955 cri.go:89] found id: ""
	I0116 03:23:08.781425 1011955 logs.go:284] 1 containers: [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24]
	I0116 03:23:08.781487 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:08.787261 1011955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:23:08.787341 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:23:08.840893 1011955 cri.go:89] found id: "c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:23:08.840929 1011955 cri.go:89] found id: ""
	I0116 03:23:08.840940 1011955 logs.go:284] 1 containers: [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8]
	I0116 03:23:08.840996 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:08.846278 1011955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:23:08.846350 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:23:08.894119 1011955 cri.go:89] found id: "8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:23:08.894141 1011955 cri.go:89] found id: ""
	I0116 03:23:08.894149 1011955 logs.go:284] 1 containers: [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760]
	I0116 03:23:08.894204 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:08.899019 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:23:08.899088 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:23:08.944579 1011955 cri.go:89] found id: "19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:23:08.944607 1011955 cri.go:89] found id: ""
	I0116 03:23:08.944616 1011955 logs.go:284] 1 containers: [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e]
	I0116 03:23:08.944689 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:08.948828 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:23:08.948907 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:23:08.997870 1011955 cri.go:89] found id: "cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:23:08.997904 1011955 cri.go:89] found id: ""
	I0116 03:23:08.997916 1011955 logs.go:284] 1 containers: [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f]
	I0116 03:23:08.997987 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:09.002335 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:23:09.002420 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:23:09.042381 1011955 cri.go:89] found id: "7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:23:09.042408 1011955 cri.go:89] found id: ""
	I0116 03:23:09.042417 1011955 logs.go:284] 1 containers: [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45]
	I0116 03:23:09.042481 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:09.047097 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:23:09.047180 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:23:09.093592 1011955 cri.go:89] found id: ""
	I0116 03:23:09.093628 1011955 logs.go:284] 0 containers: []
	W0116 03:23:09.093639 1011955 logs.go:286] No container was found matching "kindnet"
	I0116 03:23:09.093648 1011955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:23:09.093730 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:23:09.142839 1011955 cri.go:89] found id: "f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:23:09.142868 1011955 cri.go:89] found id: ""
	I0116 03:23:09.142878 1011955 logs.go:284] 1 containers: [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da]
	I0116 03:23:09.142950 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:09.146997 1011955 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:23:09.147032 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:23:09.550608 1011955 logs.go:123] Gathering logs for kubelet ...
	I0116 03:23:09.550654 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:23:09.637527 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:23:09.637714 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:23:09.660631 1011955 logs.go:123] Gathering logs for etcd [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8] ...
	I0116 03:23:09.660676 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:23:09.715818 1011955 logs.go:123] Gathering logs for kube-scheduler [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e] ...
	I0116 03:23:09.715860 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:23:09.770445 1011955 logs.go:123] Gathering logs for coredns [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760] ...
	I0116 03:23:09.770487 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:23:09.817598 1011955 logs.go:123] Gathering logs for kube-proxy [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f] ...
	I0116 03:23:09.817640 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:23:09.866233 1011955 logs.go:123] Gathering logs for kube-controller-manager [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45] ...
	I0116 03:23:09.866276 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:23:09.929526 1011955 logs.go:123] Gathering logs for storage-provisioner [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da] ...
	I0116 03:23:09.929564 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:23:09.971573 1011955 logs.go:123] Gathering logs for container status ...
	I0116 03:23:09.971603 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:23:10.023976 1011955 logs.go:123] Gathering logs for dmesg ...
	I0116 03:23:10.024008 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:23:10.042100 1011955 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:23:10.042140 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:23:10.197828 1011955 logs.go:123] Gathering logs for kube-apiserver [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24] ...
	I0116 03:23:10.197867 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:23:10.248743 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:10.248783 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:23:10.248869 1011955 out.go:239] X Problems detected in kubelet:
	W0116 03:23:10.248882 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:23:10.248900 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:23:10.248913 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:10.248919 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:23:08.545744 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:11.045197 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:13.047444 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:15.544949 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:20.249250 1011955 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0116 03:23:20.255958 1011955 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0116 03:23:20.257425 1011955 api_server.go:141] control plane version: v1.28.4
	I0116 03:23:20.257457 1011955 api_server.go:131] duration metric: took 11.526494801s to wait for apiserver health ...
	I0116 03:23:20.257467 1011955 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:23:20.257504 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:23:20.257572 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:23:20.304303 1011955 cri.go:89] found id: "94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:23:20.304331 1011955 cri.go:89] found id: ""
	I0116 03:23:20.304342 1011955 logs.go:284] 1 containers: [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24]
	I0116 03:23:20.304410 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.309509 1011955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:23:20.309599 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:23:20.353692 1011955 cri.go:89] found id: "c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:23:20.353721 1011955 cri.go:89] found id: ""
	I0116 03:23:20.353731 1011955 logs.go:284] 1 containers: [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8]
	I0116 03:23:20.353816 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.358894 1011955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:23:20.358978 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:23:20.409337 1011955 cri.go:89] found id: "8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:23:20.409364 1011955 cri.go:89] found id: ""
	I0116 03:23:20.409388 1011955 logs.go:284] 1 containers: [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760]
	I0116 03:23:20.409462 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.414337 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:23:20.414422 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:23:20.458585 1011955 cri.go:89] found id: "19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:23:20.458613 1011955 cri.go:89] found id: ""
	I0116 03:23:20.458621 1011955 logs.go:284] 1 containers: [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e]
	I0116 03:23:20.458688 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.463813 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:23:20.463899 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:23:20.514696 1011955 cri.go:89] found id: "cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:23:20.514729 1011955 cri.go:89] found id: ""
	I0116 03:23:20.514740 1011955 logs.go:284] 1 containers: [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f]
	I0116 03:23:20.514813 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.520195 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:23:20.520289 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:23:17.546020 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:19.546663 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:22.046331 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:20.563280 1011955 cri.go:89] found id: "7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:23:20.563313 1011955 cri.go:89] found id: ""
	I0116 03:23:20.563325 1011955 logs.go:284] 1 containers: [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45]
	I0116 03:23:20.563392 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.572063 1011955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:23:20.572143 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:23:20.610050 1011955 cri.go:89] found id: ""
	I0116 03:23:20.610078 1011955 logs.go:284] 0 containers: []
	W0116 03:23:20.610087 1011955 logs.go:286] No container was found matching "kindnet"
	I0116 03:23:20.610093 1011955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:23:20.610149 1011955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:23:20.651475 1011955 cri.go:89] found id: "f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:23:20.651499 1011955 cri.go:89] found id: ""
	I0116 03:23:20.651509 1011955 logs.go:284] 1 containers: [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da]
	I0116 03:23:20.651575 1011955 ssh_runner.go:195] Run: which crictl
	I0116 03:23:20.656379 1011955 logs.go:123] Gathering logs for etcd [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8] ...
	I0116 03:23:20.656405 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8"
	I0116 03:23:20.706726 1011955 logs.go:123] Gathering logs for kube-scheduler [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e] ...
	I0116 03:23:20.706762 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e"
	I0116 03:23:20.755434 1011955 logs.go:123] Gathering logs for storage-provisioner [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da] ...
	I0116 03:23:20.755472 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da"
	I0116 03:23:20.796611 1011955 logs.go:123] Gathering logs for kubelet ...
	I0116 03:23:20.796649 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:23:20.888886 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:23:20.889106 1011955 logs.go:138] Found kubelet problem: Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:23:20.915624 1011955 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:23:20.915668 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:23:21.069499 1011955 logs.go:123] Gathering logs for kube-apiserver [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24] ...
	I0116 03:23:21.069544 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24"
	I0116 03:23:21.128642 1011955 logs.go:123] Gathering logs for kube-controller-manager [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45] ...
	I0116 03:23:21.128686 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45"
	I0116 03:23:21.186151 1011955 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:23:21.186204 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:23:21.586722 1011955 logs.go:123] Gathering logs for container status ...
	I0116 03:23:21.586769 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:23:21.642253 1011955 logs.go:123] Gathering logs for dmesg ...
	I0116 03:23:21.642301 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:23:21.658076 1011955 logs.go:123] Gathering logs for coredns [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760] ...
	I0116 03:23:21.658108 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760"
	I0116 03:23:21.712191 1011955 logs.go:123] Gathering logs for kube-proxy [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f] ...
	I0116 03:23:21.712229 1011955 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f"
	I0116 03:23:21.763632 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:21.763672 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:23:21.763767 1011955 out.go:239] X Problems detected in kubelet:
	W0116 03:23:21.763792 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: W0116 03:18:52.493348    3872 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	W0116 03:23:21.763804 1011955 out.go:239]   Jan 16 03:18:52 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:18:52.493417    3872 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-775571" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-775571' and this object
	I0116 03:23:21.763816 1011955 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:21.763826 1011955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:23:24.046962 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:26.544587 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:31.774617 1011955 system_pods.go:59] 8 kube-system pods found
	I0116 03:23:31.774653 1011955 system_pods.go:61] "coredns-5dd5756b68-mk795" [b928a6ae-07af-4bc4-a0c5-b3027730459c] Running
	I0116 03:23:31.774660 1011955 system_pods.go:61] "etcd-default-k8s-diff-port-775571" [1ec6d1b7-1c79-436f-bc2c-7f25d7b35d40] Running
	I0116 03:23:31.774664 1011955 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-775571" [0085c55b-c122-41dc-ab1b-e1110606563d] Running
	I0116 03:23:31.774670 1011955 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-775571" [57f644e6-74c4-4de5-a725-5dc2e049a78a] Running
	I0116 03:23:31.774677 1011955 system_pods.go:61] "kube-proxy-zw495" [d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09] Running
	I0116 03:23:31.774683 1011955 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-775571" [8b024142-545b-46c1-babc-f0a544d2debc] Running
	I0116 03:23:31.774694 1011955 system_pods.go:61] "metrics-server-57f55c9bc5-928d7" [d3671063-27a1-4ad8-9f5f-b3e01205f483] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:23:31.774709 1011955 system_pods.go:61] "storage-provisioner" [8c309131-3f2c-411d-9876-05424a2c3b79] Running
	I0116 03:23:31.774720 1011955 system_pods.go:74] duration metric: took 11.517244217s to wait for pod list to return data ...
	I0116 03:23:31.774733 1011955 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:23:31.777691 1011955 default_sa.go:45] found service account: "default"
	I0116 03:23:31.777717 1011955 default_sa.go:55] duration metric: took 2.971824ms for default service account to be created ...
	I0116 03:23:31.777725 1011955 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:23:31.784992 1011955 system_pods.go:86] 8 kube-system pods found
	I0116 03:23:31.785020 1011955 system_pods.go:89] "coredns-5dd5756b68-mk795" [b928a6ae-07af-4bc4-a0c5-b3027730459c] Running
	I0116 03:23:31.785027 1011955 system_pods.go:89] "etcd-default-k8s-diff-port-775571" [1ec6d1b7-1c79-436f-bc2c-7f25d7b35d40] Running
	I0116 03:23:31.785032 1011955 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-775571" [0085c55b-c122-41dc-ab1b-e1110606563d] Running
	I0116 03:23:31.785036 1011955 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-775571" [57f644e6-74c4-4de5-a725-5dc2e049a78a] Running
	I0116 03:23:31.785041 1011955 system_pods.go:89] "kube-proxy-zw495" [d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09] Running
	I0116 03:23:31.785045 1011955 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-775571" [8b024142-545b-46c1-babc-f0a544d2debc] Running
	I0116 03:23:31.785053 1011955 system_pods.go:89] "metrics-server-57f55c9bc5-928d7" [d3671063-27a1-4ad8-9f5f-b3e01205f483] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:23:31.785058 1011955 system_pods.go:89] "storage-provisioner" [8c309131-3f2c-411d-9876-05424a2c3b79] Running
	I0116 03:23:31.785066 1011955 system_pods.go:126] duration metric: took 7.335258ms to wait for k8s-apps to be running ...
	I0116 03:23:31.785075 1011955 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:23:31.785125 1011955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:23:31.801767 1011955 system_svc.go:56] duration metric: took 16.666559ms WaitForService to wait for kubelet.
	I0116 03:23:31.801797 1011955 kubeadm.go:581] duration metric: took 4m38.631327454s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:23:31.801841 1011955 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:23:31.805655 1011955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:23:31.805721 1011955 node_conditions.go:123] node cpu capacity is 2
	I0116 03:23:31.805773 1011955 node_conditions.go:105] duration metric: took 3.924567ms to run NodePressure ...
	I0116 03:23:31.805791 1011955 start.go:228] waiting for startup goroutines ...
	I0116 03:23:31.805822 1011955 start.go:233] waiting for cluster config update ...
	I0116 03:23:31.805842 1011955 start.go:242] writing updated cluster config ...
	I0116 03:23:31.806160 1011955 ssh_runner.go:195] Run: rm -f paused
	I0116 03:23:31.863603 1011955 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:23:31.865992 1011955 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-775571" cluster and "default" namespace by default
	I0116 03:23:28.545733 1011460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:23:30.051002 1011460 pod_ready.go:81] duration metric: took 4m0.013925231s waiting for pod "metrics-server-57f55c9bc5-6w2t7" in "kube-system" namespace to be "Ready" ...
	E0116 03:23:30.051029 1011460 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:23:30.051040 1011460 pod_ready.go:38] duration metric: took 4m3.438310266s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:23:30.051073 1011460 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:23:30.051111 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:23:30.051173 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:23:30.118195 1011460 cri.go:89] found id: "f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:30.118230 1011460 cri.go:89] found id: ""
	I0116 03:23:30.118241 1011460 logs.go:284] 1 containers: [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e]
	I0116 03:23:30.118325 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.124760 1011460 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:23:30.124844 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:23:30.193482 1011460 cri.go:89] found id: "2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:30.193512 1011460 cri.go:89] found id: ""
	I0116 03:23:30.193522 1011460 logs.go:284] 1 containers: [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f]
	I0116 03:23:30.193586 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.201066 1011460 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:23:30.201155 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:23:30.265943 1011460 cri.go:89] found id: "229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:30.265979 1011460 cri.go:89] found id: ""
	I0116 03:23:30.265991 1011460 logs.go:284] 1 containers: [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058]
	I0116 03:23:30.266071 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.271404 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:23:30.271498 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:23:30.315307 1011460 cri.go:89] found id: "63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:30.315336 1011460 cri.go:89] found id: ""
	I0116 03:23:30.315346 1011460 logs.go:284] 1 containers: [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021]
	I0116 03:23:30.315422 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.321045 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:23:30.321118 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:23:30.370734 1011460 cri.go:89] found id: "153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:30.370760 1011460 cri.go:89] found id: ""
	I0116 03:23:30.370770 1011460 logs.go:284] 1 containers: [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8]
	I0116 03:23:30.370821 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.375705 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:23:30.375785 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:23:30.415457 1011460 cri.go:89] found id: "997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:30.415487 1011460 cri.go:89] found id: ""
	I0116 03:23:30.415498 1011460 logs.go:284] 1 containers: [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d]
	I0116 03:23:30.415569 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.420117 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:23:30.420209 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:23:30.461056 1011460 cri.go:89] found id: ""
	I0116 03:23:30.461093 1011460 logs.go:284] 0 containers: []
	W0116 03:23:30.461105 1011460 logs.go:286] No container was found matching "kindnet"
	I0116 03:23:30.461114 1011460 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:23:30.461186 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:23:30.504581 1011460 cri.go:89] found id: "4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:30.504616 1011460 cri.go:89] found id: ""
	I0116 03:23:30.504627 1011460 logs.go:284] 1 containers: [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19]
	I0116 03:23:30.504698 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:30.509619 1011460 logs.go:123] Gathering logs for coredns [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058] ...
	I0116 03:23:30.509670 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:30.553986 1011460 logs.go:123] Gathering logs for kube-controller-manager [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d] ...
	I0116 03:23:30.554027 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:30.613360 1011460 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:23:30.613415 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:23:31.049281 1011460 logs.go:123] Gathering logs for dmesg ...
	I0116 03:23:31.049331 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:23:31.067692 1011460 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:23:31.067732 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:23:31.225415 1011460 logs.go:123] Gathering logs for kube-apiserver [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e] ...
	I0116 03:23:31.225457 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:31.288824 1011460 logs.go:123] Gathering logs for etcd [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f] ...
	I0116 03:23:31.288865 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:31.349273 1011460 logs.go:123] Gathering logs for container status ...
	I0116 03:23:31.349318 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:23:31.398655 1011460 logs.go:123] Gathering logs for kubelet ...
	I0116 03:23:31.398696 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:23:31.469496 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.469683 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.469882 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.470041 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:31.493488 1011460 logs.go:123] Gathering logs for kube-scheduler [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021] ...
	I0116 03:23:31.493533 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:31.551159 1011460 logs.go:123] Gathering logs for kube-proxy [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8] ...
	I0116 03:23:31.551200 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:31.590293 1011460 logs.go:123] Gathering logs for storage-provisioner [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19] ...
	I0116 03:23:31.590434 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:31.634337 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:31.634367 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:23:31.634430 1011460 out.go:239] X Problems detected in kubelet:
	W0116 03:23:31.634447 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.634457 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.634471 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:31.634476 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:31.634485 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:31.634490 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:23:41.635544 1011460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:23:41.654207 1011460 api_server.go:72] duration metric: took 4m16.125890122s to wait for apiserver process to appear ...
	I0116 03:23:41.654244 1011460 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:23:41.654312 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:23:41.654391 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:23:41.704947 1011460 cri.go:89] found id: "f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:41.704976 1011460 cri.go:89] found id: ""
	I0116 03:23:41.704984 1011460 logs.go:284] 1 containers: [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e]
	I0116 03:23:41.705042 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.710602 1011460 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:23:41.710687 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:23:41.754322 1011460 cri.go:89] found id: "2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:41.754356 1011460 cri.go:89] found id: ""
	I0116 03:23:41.754368 1011460 logs.go:284] 1 containers: [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f]
	I0116 03:23:41.754437 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.760172 1011460 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:23:41.760283 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:23:41.810626 1011460 cri.go:89] found id: "229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:41.810664 1011460 cri.go:89] found id: ""
	I0116 03:23:41.810674 1011460 logs.go:284] 1 containers: [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058]
	I0116 03:23:41.810749 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.815588 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:23:41.815687 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:23:41.859547 1011460 cri.go:89] found id: "63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:41.859573 1011460 cri.go:89] found id: ""
	I0116 03:23:41.859580 1011460 logs.go:284] 1 containers: [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021]
	I0116 03:23:41.859637 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.864333 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:23:41.864416 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:23:41.914604 1011460 cri.go:89] found id: "153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:41.914638 1011460 cri.go:89] found id: ""
	I0116 03:23:41.914648 1011460 logs.go:284] 1 containers: [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8]
	I0116 03:23:41.914718 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.919459 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:23:41.919538 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:23:41.965709 1011460 cri.go:89] found id: "997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:41.965751 1011460 cri.go:89] found id: ""
	I0116 03:23:41.965763 1011460 logs.go:284] 1 containers: [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d]
	I0116 03:23:41.965857 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:41.970346 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:23:41.970445 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:23:42.017222 1011460 cri.go:89] found id: ""
	I0116 03:23:42.017253 1011460 logs.go:284] 0 containers: []
	W0116 03:23:42.017265 1011460 logs.go:286] No container was found matching "kindnet"
	I0116 03:23:42.017275 1011460 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:23:42.017341 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:23:42.065935 1011460 cri.go:89] found id: "4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:42.065967 1011460 cri.go:89] found id: ""
	I0116 03:23:42.065977 1011460 logs.go:284] 1 containers: [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19]
	I0116 03:23:42.066041 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:42.070695 1011460 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:23:42.070722 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:23:42.440423 1011460 logs.go:123] Gathering logs for kubelet ...
	I0116 03:23:42.440483 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:23:42.514598 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:42.514770 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:42.514914 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:42.515087 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:42.539532 1011460 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:23:42.539575 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:23:42.708733 1011460 logs.go:123] Gathering logs for coredns [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058] ...
	I0116 03:23:42.708775 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:42.792841 1011460 logs.go:123] Gathering logs for kube-scheduler [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021] ...
	I0116 03:23:42.792886 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:42.860086 1011460 logs.go:123] Gathering logs for kube-proxy [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8] ...
	I0116 03:23:42.860130 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:42.906116 1011460 logs.go:123] Gathering logs for kube-controller-manager [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d] ...
	I0116 03:23:42.906156 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:42.962172 1011460 logs.go:123] Gathering logs for storage-provisioner [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19] ...
	I0116 03:23:42.962220 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:43.001097 1011460 logs.go:123] Gathering logs for dmesg ...
	I0116 03:23:43.001133 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:23:43.017487 1011460 logs.go:123] Gathering logs for kube-apiserver [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e] ...
	I0116 03:23:43.017533 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:43.077368 1011460 logs.go:123] Gathering logs for etcd [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f] ...
	I0116 03:23:43.077408 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:43.125553 1011460 logs.go:123] Gathering logs for container status ...
	I0116 03:23:43.125587 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:23:43.175165 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:43.175195 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:23:43.175256 1011460 out.go:239] X Problems detected in kubelet:
	W0116 03:23:43.175268 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:43.175279 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:43.175292 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:43.175300 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:43.175308 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:43.175316 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:23:53.176994 1011460 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0116 03:23:53.183515 1011460 api_server.go:279] https://192.168.50.29:8443/healthz returned 200:
	ok
	I0116 03:23:53.185020 1011460 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 03:23:53.185050 1011460 api_server.go:131] duration metric: took 11.530797787s to wait for apiserver health ...
	I0116 03:23:53.185061 1011460 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:23:53.185092 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:23:53.185148 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:23:53.234245 1011460 cri.go:89] found id: "f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:53.234274 1011460 cri.go:89] found id: ""
	I0116 03:23:53.234284 1011460 logs.go:284] 1 containers: [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e]
	I0116 03:23:53.234356 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.239078 1011460 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:23:53.239169 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:23:53.286989 1011460 cri.go:89] found id: "2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:53.287021 1011460 cri.go:89] found id: ""
	I0116 03:23:53.287031 1011460 logs.go:284] 1 containers: [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f]
	I0116 03:23:53.287106 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.291809 1011460 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:23:53.291898 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:23:53.342514 1011460 cri.go:89] found id: "229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:53.342549 1011460 cri.go:89] found id: ""
	I0116 03:23:53.342560 1011460 logs.go:284] 1 containers: [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058]
	I0116 03:23:53.342644 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.347443 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:23:53.347536 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:23:53.407101 1011460 cri.go:89] found id: "63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:53.407129 1011460 cri.go:89] found id: ""
	I0116 03:23:53.407139 1011460 logs.go:284] 1 containers: [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021]
	I0116 03:23:53.407204 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.411444 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:23:53.411526 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:23:53.451514 1011460 cri.go:89] found id: "153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:53.451538 1011460 cri.go:89] found id: ""
	I0116 03:23:53.451545 1011460 logs.go:284] 1 containers: [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8]
	I0116 03:23:53.451613 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.455819 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:23:53.455907 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:23:53.498341 1011460 cri.go:89] found id: "997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:53.498372 1011460 cri.go:89] found id: ""
	I0116 03:23:53.498385 1011460 logs.go:284] 1 containers: [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d]
	I0116 03:23:53.498456 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.503007 1011460 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:23:53.503075 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:23:53.549549 1011460 cri.go:89] found id: ""
	I0116 03:23:53.549585 1011460 logs.go:284] 0 containers: []
	W0116 03:23:53.549597 1011460 logs.go:286] No container was found matching "kindnet"
	I0116 03:23:53.549606 1011460 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:23:53.549676 1011460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:23:53.590624 1011460 cri.go:89] found id: "4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:53.590655 1011460 cri.go:89] found id: ""
	I0116 03:23:53.590672 1011460 logs.go:284] 1 containers: [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19]
	I0116 03:23:53.590743 1011460 ssh_runner.go:195] Run: which crictl
	I0116 03:23:53.594912 1011460 logs.go:123] Gathering logs for etcd [2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f] ...
	I0116 03:23:53.594950 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2abc1d37662ffc8c141014651481ef2b375ac1c303eff20d0fcb56cd604a2f8f"
	I0116 03:23:53.644842 1011460 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:23:53.644885 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:23:54.036154 1011460 logs.go:123] Gathering logs for container status ...
	I0116 03:23:54.036221 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:23:54.096374 1011460 logs.go:123] Gathering logs for kubelet ...
	I0116 03:23:54.096416 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 03:23:54.170840 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.171084 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.171231 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.171388 1011460 logs.go:138] Found kubelet problem: Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:54.197037 1011460 logs.go:123] Gathering logs for kube-apiserver [f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e] ...
	I0116 03:23:54.197086 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2403bf8a85e789f4307c4ad0dc429903a20a496b9a9e4ee152f4fb45edeaf5e"
	I0116 03:23:54.254502 1011460 logs.go:123] Gathering logs for coredns [229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058] ...
	I0116 03:23:54.254558 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229310b5851cfb28499024703302e9d76a32a5a7dd165d38163bd4a7e4457058"
	I0116 03:23:54.296951 1011460 logs.go:123] Gathering logs for kube-scheduler [63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021] ...
	I0116 03:23:54.296999 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e8de06e9ec3ccb1ecfe2142fb84f65efabdec07da804a74ec55b57f677b021"
	I0116 03:23:54.353946 1011460 logs.go:123] Gathering logs for kube-proxy [153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8] ...
	I0116 03:23:54.354001 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d0b659aaa8533b9004f341817ab607005ff6cb1680ef017005a00267ffda8"
	I0116 03:23:54.399575 1011460 logs.go:123] Gathering logs for kube-controller-manager [997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d] ...
	I0116 03:23:54.399609 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 997be6a446a806235150d69a5095c196d7a47ada007ba33985478f573234106d"
	I0116 03:23:54.463603 1011460 logs.go:123] Gathering logs for storage-provisioner [4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19] ...
	I0116 03:23:54.463643 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a915cd4aa42fcabfc0d11618fe4a45d4b28fcbaec26574a133eea4ed0527d19"
	I0116 03:23:54.508557 1011460 logs.go:123] Gathering logs for dmesg ...
	I0116 03:23:54.508594 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:23:54.522542 1011460 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:23:54.522574 1011460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:23:54.653996 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:54.654029 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 03:23:54.654095 1011460 out.go:239] X Problems detected in kubelet:
	W0116 03:23:54.654115 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.274014    4294 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.654128 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.274058    4294 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.654140 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: W0116 03:19:25.277138    4294 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	W0116 03:23:54.654148 1011460 out.go:239]   Jan 16 03:19:25 no-preload-934668 kubelet[4294]: E0116 03:19:25.277170    4294 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-934668" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-934668' and this object
	I0116 03:23:54.654158 1011460 out.go:309] Setting ErrFile to fd 2...
	I0116 03:23:54.654167 1011460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:24:04.664925 1011460 system_pods.go:59] 8 kube-system pods found
	I0116 03:24:04.664971 1011460 system_pods.go:61] "coredns-76f75df574-k2kc7" [d05aee05-aff7-4500-b656-8f66a3f622d2] Running
	I0116 03:24:04.664978 1011460 system_pods.go:61] "etcd-no-preload-934668" [b927b4df-f865-400c-9277-32778f7c5e30] Running
	I0116 03:24:04.664986 1011460 system_pods.go:61] "kube-apiserver-no-preload-934668" [648abde5-ec7c-4fd4-81e5-734ac6e631fc] Running
	I0116 03:24:04.664994 1011460 system_pods.go:61] "kube-controller-manager-no-preload-934668" [8a568dfa-e657-47e8-b369-c02a31271e58] Running
	I0116 03:24:04.664998 1011460 system_pods.go:61] "kube-proxy-fr424" [f24ae333-7f56-47bf-b66f-3192010a2cc4] Running
	I0116 03:24:04.665003 1011460 system_pods.go:61] "kube-scheduler-no-preload-934668" [fc295053-1d78-4f15-91f8-41330bf47c1a] Running
	I0116 03:24:04.665013 1011460 system_pods.go:61] "metrics-server-57f55c9bc5-6w2t7" [5169514b-c507-4e5e-b607-6806f6e32801] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:24:04.665019 1011460 system_pods.go:61] "storage-provisioner" [eb4f416a-8bdc-4a7c-bea1-14015339520b] Running
	I0116 03:24:04.665027 1011460 system_pods.go:74] duration metric: took 11.479959039s to wait for pod list to return data ...
	I0116 03:24:04.665042 1011460 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:24:04.668183 1011460 default_sa.go:45] found service account: "default"
	I0116 03:24:04.668217 1011460 default_sa.go:55] duration metric: took 3.167177ms for default service account to be created ...
	I0116 03:24:04.668228 1011460 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:24:04.674701 1011460 system_pods.go:86] 8 kube-system pods found
	I0116 03:24:04.674736 1011460 system_pods.go:89] "coredns-76f75df574-k2kc7" [d05aee05-aff7-4500-b656-8f66a3f622d2] Running
	I0116 03:24:04.674742 1011460 system_pods.go:89] "etcd-no-preload-934668" [b927b4df-f865-400c-9277-32778f7c5e30] Running
	I0116 03:24:04.674747 1011460 system_pods.go:89] "kube-apiserver-no-preload-934668" [648abde5-ec7c-4fd4-81e5-734ac6e631fc] Running
	I0116 03:24:04.674752 1011460 system_pods.go:89] "kube-controller-manager-no-preload-934668" [8a568dfa-e657-47e8-b369-c02a31271e58] Running
	I0116 03:24:04.674756 1011460 system_pods.go:89] "kube-proxy-fr424" [f24ae333-7f56-47bf-b66f-3192010a2cc4] Running
	I0116 03:24:04.674760 1011460 system_pods.go:89] "kube-scheduler-no-preload-934668" [fc295053-1d78-4f15-91f8-41330bf47c1a] Running
	I0116 03:24:04.674766 1011460 system_pods.go:89] "metrics-server-57f55c9bc5-6w2t7" [5169514b-c507-4e5e-b607-6806f6e32801] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:24:04.674771 1011460 system_pods.go:89] "storage-provisioner" [eb4f416a-8bdc-4a7c-bea1-14015339520b] Running
	I0116 03:24:04.674780 1011460 system_pods.go:126] duration metric: took 6.545541ms to wait for k8s-apps to be running ...
	I0116 03:24:04.674794 1011460 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:24:04.674845 1011460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:24:04.692060 1011460 system_svc.go:56] duration metric: took 17.248436ms WaitForService to wait for kubelet.
	I0116 03:24:04.692099 1011460 kubeadm.go:581] duration metric: took 4m39.163790794s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:24:04.692129 1011460 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:24:04.696664 1011460 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:24:04.696709 1011460 node_conditions.go:123] node cpu capacity is 2
	I0116 03:24:04.696728 1011460 node_conditions.go:105] duration metric: took 4.592869ms to run NodePressure ...
	I0116 03:24:04.696745 1011460 start.go:228] waiting for startup goroutines ...
	I0116 03:24:04.696755 1011460 start.go:233] waiting for cluster config update ...
	I0116 03:24:04.696770 1011460 start.go:242] writing updated cluster config ...
	I0116 03:24:04.697135 1011460 ssh_runner.go:195] Run: rm -f paused
	I0116 03:24:04.750649 1011460 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0116 03:24:04.752669 1011460 out.go:177] * Done! kubectl is now configured to use "no-preload-934668" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 03:13:01 UTC, ends at Tue 2024-01-16 03:32:02 UTC. --
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.127972398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39f3d7fe5482f24d0405541c570378b116cdad4537f37ecd47a184e266678440,PodSandboxId:a7674dd11cbc95573f5a04933204bad08362a263485fa925f821ab54ea8e422a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375136967703370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5fac91-1606-4716-a04a-18e9d80c926b,},Annotations:map[string]string{io.kubernetes.container.hash: 3311cea0,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccddac0572d056ac2605ff0ddb55170ab09f4e0657bcb57b1a8faab6d748f1ae,PodSandboxId:b9cd1654d0aa8be03fd80ca5af350b324f6196fea89934eb5d6d5dd938c924ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705375136457781786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tv7gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1bf5e59-b2ae-489c-8297-25ad7c456303,},Annotations:map[string]string{io.kubernetes.container.hash: 52d989bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd98624191993b478451414500a1a65222fd5fa43a8947d332d980468fc0a67a,PodSandboxId:be8a8f92ce56eb65b9b241795b17f6a11b3acf0d096a5a926884629c8906873b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705375135624333509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qmzl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4b23ca-c18b-4158-b15b-df53326b384c,},Annotations:map[string]string{io.kubernetes.container.hash: 4a86af59,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e327d721f3f2f3b4ab0bbb5f1278403fea05f4be2b069c149c00c4a8c8a45fff,PodSandboxId:8db08f325ea8f04c81080318269bee2998c25d84a01408fb42c9a4101efad5b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705375109035882891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c003f74fd30a9694a059c0d4138e96d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1a366b22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c8ff8ca133a1b2c457758f11318941da96fd0179f38c83dff5592244df7855b,PodSandboxId:7788241941dcfa5597a07cb17ab04e7385db133cde9f7d3fcb79b38c69f99b44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705375107770737585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0478c9a69e8126feecf553fe27b1082236f91220b1da2fa67d23a45755014ee9,PodSandboxId:d6c1775cb397c96897a8de69200128e362f2c5848fb82bc56c1bda139fd469de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705375107533635031,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f47fadd92bab56c45d6d368728025ede6d264522c3f0cfd263543f5d68ae0e2,PodSandboxId:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705375106921765751,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 96857140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c79c8713cf40509eb13eda6bc933e764ae4bb84455c0a1ed89ca21d32bc667cc,PodSandboxId:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1705374811839259892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 96857140,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=718edc97-3977-4ebe-91ef-a7cc00e453be name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.151318752Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f9c7b883-0e21-4b91-819c-88c354ca8fad name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.151615330Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:977bd48eb55f43cde218a057913f8e157f9faf5db389607253f79e912a0aaf3a,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-tx8jt,Uid:790d860a-d27f-4535-94d8-64f40cb79071,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375136595811055,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-tx8jt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 790d860a-d27f-4535-94d8-64f40cb79071,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T03:18:56.24741712Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a7674dd11cbc95573f5a04933204bad08362a263485fa925f821ab54ea8e422a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:aa5fac91-1606-4716-a04a-18e9d80c926
b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375135516849241,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5fac91-1606-4716-a04a-18e9d80c926b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"
volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-16T03:18:55.158636457Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:be8a8f92ce56eb65b9b241795b17f6a11b3acf0d096a5a926884629c8906873b,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-qmzl6,Uid:3e4b23ca-c18b-4158-b15b-df53326b384c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375135170315033,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-qmzl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4b23ca-c18b-4158-b15b-df53326b384c,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T03:18:54.82529603Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b9cd1654d0aa8be03fd80ca5af350b324f6196fea89934eb5d6d5dd938c924ce,Metadata:&PodSandboxMetadata{Name:kube-proxy-tv7gz,Uid:a1bf5e59-b2ae-489c-8297-
25ad7c456303,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375134340838575,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tv7gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1bf5e59-b2ae-489c-8297-25ad7c456303,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T03:18:53.995352307Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8db08f325ea8f04c81080318269bee2998c25d84a01408fb42c9a4101efad5b1,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-788237,Uid:2c003f74fd30a9694a059c0d4138e96d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375106987675117,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c003f74fd30a9694a059c0d4138e96d,tier: control
-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2c003f74fd30a9694a059c0d4138e96d,kubernetes.io/config.seen: 2024-01-16T03:18:26.596267947Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7788241941dcfa5597a07cb17ab04e7385db133cde9f7d3fcb79b38c69f99b44,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-788237,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375106982705959,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2024-01-16T03:18:26.587964681Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d6c1775cb397c96897a8de69200128
e362f2c5848fb82bc56c1bda139fd469de,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-788237,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375106978217493,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2024-01-16T03:18:26.590675663Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-788237,Uid:6230b846da04f59ee8bd2493df23aee6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705374811151553828,Labels:map[string]string{component: kube-apiserver,io.k
ubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6230b846da04f59ee8bd2493df23aee6,kubernetes.io/config.seen: 2024-01-16T03:13:30.275973001Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=f9c7b883-0e21-4b91-819c-88c354ca8fad name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.152237798Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c5934990-419f-47ca-b060-d244f2ef711b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.152314603Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c5934990-419f-47ca-b060-d244f2ef711b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.152590827Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39f3d7fe5482f24d0405541c570378b116cdad4537f37ecd47a184e266678440,PodSandboxId:a7674dd11cbc95573f5a04933204bad08362a263485fa925f821ab54ea8e422a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375136967703370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5fac91-1606-4716-a04a-18e9d80c926b,},Annotations:map[string]string{io.kubernetes.container.hash: 3311cea0,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccddac0572d056ac2605ff0ddb55170ab09f4e0657bcb57b1a8faab6d748f1ae,PodSandboxId:b9cd1654d0aa8be03fd80ca5af350b324f6196fea89934eb5d6d5dd938c924ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705375136457781786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tv7gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1bf5e59-b2ae-489c-8297-25ad7c456303,},Annotations:map[string]string{io.kubernetes.container.hash: 52d989bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd98624191993b478451414500a1a65222fd5fa43a8947d332d980468fc0a67a,PodSandboxId:be8a8f92ce56eb65b9b241795b17f6a11b3acf0d096a5a926884629c8906873b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705375135624333509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qmzl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4b23ca-c18b-4158-b15b-df53326b384c,},Annotations:map[string]string{io.kubernetes.container.hash: 4a86af59,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e327d721f3f2f3b4ab0bbb5f1278403fea05f4be2b069c149c00c4a8c8a45fff,PodSandboxId:8db08f325ea8f04c81080318269bee2998c25d84a01408fb42c9a4101efad5b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705375109035882891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c003f74fd30a9694a059c0d4138e96d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1a366b22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c8ff8ca133a1b2c457758f11318941da96fd0179f38c83dff5592244df7855b,PodSandboxId:7788241941dcfa5597a07cb17ab04e7385db133cde9f7d3fcb79b38c69f99b44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705375107770737585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0478c9a69e8126feecf553fe27b1082236f91220b1da2fa67d23a45755014ee9,PodSandboxId:d6c1775cb397c96897a8de69200128e362f2c5848fb82bc56c1bda139fd469de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705375107533635031,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f47fadd92bab56c45d6d368728025ede6d264522c3f0cfd263543f5d68ae0e2,PodSandboxId:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705375106921765751,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 96857140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c5934990-419f-47ca-b060-d244f2ef711b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.153342650Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3edc3cd1-9120-42ec-8fdd-2978d7d33743 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.153648412Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:977bd48eb55f43cde218a057913f8e157f9faf5db389607253f79e912a0aaf3a,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-tx8jt,Uid:790d860a-d27f-4535-94d8-64f40cb79071,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375136595811055,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-tx8jt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 790d860a-d27f-4535-94d8-64f40cb79071,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T03:18:56.24741712Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a7674dd11cbc95573f5a04933204bad08362a263485fa925f821ab54ea8e422a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:aa5fac91-1606-4716-a04a-18e9d80c926
b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375135516849241,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5fac91-1606-4716-a04a-18e9d80c926b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"
volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-16T03:18:55.158636457Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:be8a8f92ce56eb65b9b241795b17f6a11b3acf0d096a5a926884629c8906873b,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-qmzl6,Uid:3e4b23ca-c18b-4158-b15b-df53326b384c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375135170315033,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-qmzl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4b23ca-c18b-4158-b15b-df53326b384c,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T03:18:54.82529603Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b9cd1654d0aa8be03fd80ca5af350b324f6196fea89934eb5d6d5dd938c924ce,Metadata:&PodSandboxMetadata{Name:kube-proxy-tv7gz,Uid:a1bf5e59-b2ae-489c-8297-
25ad7c456303,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375134340838575,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tv7gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1bf5e59-b2ae-489c-8297-25ad7c456303,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T03:18:53.995352307Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8db08f325ea8f04c81080318269bee2998c25d84a01408fb42c9a4101efad5b1,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-788237,Uid:2c003f74fd30a9694a059c0d4138e96d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375106987675117,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c003f74fd30a9694a059c0d4138e96d,tier: control
-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2c003f74fd30a9694a059c0d4138e96d,kubernetes.io/config.seen: 2024-01-16T03:18:26.596267947Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7788241941dcfa5597a07cb17ab04e7385db133cde9f7d3fcb79b38c69f99b44,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-788237,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375106982705959,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2024-01-16T03:18:26.587964681Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d6c1775cb397c96897a8de69200128
e362f2c5848fb82bc56c1bda139fd469de,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-788237,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375106978217493,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2024-01-16T03:18:26.590675663Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-788237,Uid:6230b846da04f59ee8bd2493df23aee6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705374811151553828,Labels:map[string]string{component: kube-apiserver,io.k
ubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6230b846da04f59ee8bd2493df23aee6,kubernetes.io/config.seen: 2024-01-16T03:13:30.275973001Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=3edc3cd1-9120-42ec-8fdd-2978d7d33743 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.156763798Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b8dab143-542b-43a9-a911-4d8817dbc494 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.157192550Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b8dab143-542b-43a9-a911-4d8817dbc494 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.157594881Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39f3d7fe5482f24d0405541c570378b116cdad4537f37ecd47a184e266678440,PodSandboxId:a7674dd11cbc95573f5a04933204bad08362a263485fa925f821ab54ea8e422a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375136967703370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5fac91-1606-4716-a04a-18e9d80c926b,},Annotations:map[string]string{io.kubernetes.container.hash: 3311cea0,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccddac0572d056ac2605ff0ddb55170ab09f4e0657bcb57b1a8faab6d748f1ae,PodSandboxId:b9cd1654d0aa8be03fd80ca5af350b324f6196fea89934eb5d6d5dd938c924ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705375136457781786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tv7gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1bf5e59-b2ae-489c-8297-25ad7c456303,},Annotations:map[string]string{io.kubernetes.container.hash: 52d989bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd98624191993b478451414500a1a65222fd5fa43a8947d332d980468fc0a67a,PodSandboxId:be8a8f92ce56eb65b9b241795b17f6a11b3acf0d096a5a926884629c8906873b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705375135624333509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qmzl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4b23ca-c18b-4158-b15b-df53326b384c,},Annotations:map[string]string{io.kubernetes.container.hash: 4a86af59,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e327d721f3f2f3b4ab0bbb5f1278403fea05f4be2b069c149c00c4a8c8a45fff,PodSandboxId:8db08f325ea8f04c81080318269bee2998c25d84a01408fb42c9a4101efad5b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705375109035882891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c003f74fd30a9694a059c0d4138e96d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1a366b22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c8ff8ca133a1b2c457758f11318941da96fd0179f38c83dff5592244df7855b,PodSandboxId:7788241941dcfa5597a07cb17ab04e7385db133cde9f7d3fcb79b38c69f99b44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705375107770737585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0478c9a69e8126feecf553fe27b1082236f91220b1da2fa67d23a45755014ee9,PodSandboxId:d6c1775cb397c96897a8de69200128e362f2c5848fb82bc56c1bda139fd469de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705375107533635031,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f47fadd92bab56c45d6d368728025ede6d264522c3f0cfd263543f5d68ae0e2,PodSandboxId:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705375106921765751,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 96857140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b8dab143-542b-43a9-a911-4d8817dbc494 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.178426758Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=116f6c50-72c6-4e85-9f47-281f8df5ac30 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.178633110Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=116f6c50-72c6-4e85-9f47-281f8df5ac30 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.180154625Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8898a08d-46f6-4736-8d99-635c45ec9dbc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.180969286Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375922180950746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8898a08d-46f6-4736-8d99-635c45ec9dbc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.181919115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=79838b8f-00d4-4e23-a018-6df4192f6085 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.181996244Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=79838b8f-00d4-4e23-a018-6df4192f6085 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.182214229Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39f3d7fe5482f24d0405541c570378b116cdad4537f37ecd47a184e266678440,PodSandboxId:a7674dd11cbc95573f5a04933204bad08362a263485fa925f821ab54ea8e422a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375136967703370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5fac91-1606-4716-a04a-18e9d80c926b,},Annotations:map[string]string{io.kubernetes.container.hash: 3311cea0,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccddac0572d056ac2605ff0ddb55170ab09f4e0657bcb57b1a8faab6d748f1ae,PodSandboxId:b9cd1654d0aa8be03fd80ca5af350b324f6196fea89934eb5d6d5dd938c924ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705375136457781786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tv7gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1bf5e59-b2ae-489c-8297-25ad7c456303,},Annotations:map[string]string{io.kubernetes.container.hash: 52d989bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd98624191993b478451414500a1a65222fd5fa43a8947d332d980468fc0a67a,PodSandboxId:be8a8f92ce56eb65b9b241795b17f6a11b3acf0d096a5a926884629c8906873b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705375135624333509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qmzl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4b23ca-c18b-4158-b15b-df53326b384c,},Annotations:map[string]string{io.kubernetes.container.hash: 4a86af59,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e327d721f3f2f3b4ab0bbb5f1278403fea05f4be2b069c149c00c4a8c8a45fff,PodSandboxId:8db08f325ea8f04c81080318269bee2998c25d84a01408fb42c9a4101efad5b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705375109035882891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c003f74fd30a9694a059c0d4138e96d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1a366b22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c8ff8ca133a1b2c457758f11318941da96fd0179f38c83dff5592244df7855b,PodSandboxId:7788241941dcfa5597a07cb17ab04e7385db133cde9f7d3fcb79b38c69f99b44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705375107770737585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0478c9a69e8126feecf553fe27b1082236f91220b1da2fa67d23a45755014ee9,PodSandboxId:d6c1775cb397c96897a8de69200128e362f2c5848fb82bc56c1bda139fd469de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705375107533635031,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f47fadd92bab56c45d6d368728025ede6d264522c3f0cfd263543f5d68ae0e2,PodSandboxId:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705375106921765751,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 96857140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c79c8713cf40509eb13eda6bc933e764ae4bb84455c0a1ed89ca21d32bc667cc,PodSandboxId:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1705374811839259892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 96857140,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=79838b8f-00d4-4e23-a018-6df4192f6085 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.228700307Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=69ad6221-2188-4a73-96c6-b94b25dd1b5b name=/runtime.v1.RuntimeService/Version
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.228768029Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=69ad6221-2188-4a73-96c6-b94b25dd1b5b name=/runtime.v1.RuntimeService/Version
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.230347743Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=be679d8e-abbf-44f7-90af-75c18f68bed1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.231272000Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375922231238135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=be679d8e-abbf-44f7-90af-75c18f68bed1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.232409407Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aedf027d-a2f1-481c-9b27-84030ad365d1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.232577646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aedf027d-a2f1-481c-9b27-84030ad365d1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:32:02 old-k8s-version-788237 crio[714]: time="2024-01-16 03:32:02.232793778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39f3d7fe5482f24d0405541c570378b116cdad4537f37ecd47a184e266678440,PodSandboxId:a7674dd11cbc95573f5a04933204bad08362a263485fa925f821ab54ea8e422a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375136967703370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5fac91-1606-4716-a04a-18e9d80c926b,},Annotations:map[string]string{io.kubernetes.container.hash: 3311cea0,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccddac0572d056ac2605ff0ddb55170ab09f4e0657bcb57b1a8faab6d748f1ae,PodSandboxId:b9cd1654d0aa8be03fd80ca5af350b324f6196fea89934eb5d6d5dd938c924ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705375136457781786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tv7gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1bf5e59-b2ae-489c-8297-25ad7c456303,},Annotations:map[string]string{io.kubernetes.container.hash: 52d989bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd98624191993b478451414500a1a65222fd5fa43a8947d332d980468fc0a67a,PodSandboxId:be8a8f92ce56eb65b9b241795b17f6a11b3acf0d096a5a926884629c8906873b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705375135624333509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qmzl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4b23ca-c18b-4158-b15b-df53326b384c,},Annotations:map[string]string{io.kubernetes.container.hash: 4a86af59,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e327d721f3f2f3b4ab0bbb5f1278403fea05f4be2b069c149c00c4a8c8a45fff,PodSandboxId:8db08f325ea8f04c81080318269bee2998c25d84a01408fb42c9a4101efad5b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705375109035882891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c003f74fd30a9694a059c0d4138e96d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1a366b22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c8ff8ca133a1b2c457758f11318941da96fd0179f38c83dff5592244df7855b,PodSandboxId:7788241941dcfa5597a07cb17ab04e7385db133cde9f7d3fcb79b38c69f99b44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705375107770737585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0478c9a69e8126feecf553fe27b1082236f91220b1da2fa67d23a45755014ee9,PodSandboxId:d6c1775cb397c96897a8de69200128e362f2c5848fb82bc56c1bda139fd469de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705375107533635031,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f47fadd92bab56c45d6d368728025ede6d264522c3f0cfd263543f5d68ae0e2,PodSandboxId:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705375106921765751,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 96857140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c79c8713cf40509eb13eda6bc933e764ae4bb84455c0a1ed89ca21d32bc667cc,PodSandboxId:bb471801af694890715cc7a3b8d88168a940912300e3f82a83b4293f06d11dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1705374811839259892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-788237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6230b846da04f59ee8bd2493df23aee6,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 96857140,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aedf027d-a2f1-481c-9b27-84030ad365d1 name=/runtime.v1.RuntimeService/ListContainers
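The crio debug entries above are the raw CRI gRPC exchanges (ListContainers, ListPodSandbox, Version, ImageFsInfo) behind the container status summary that follows. As a minimal sketch, the same queries can be repeated by hand from inside the node with crictl pointed at the crio socket recorded in the node annotations below; the exact crictl invocation is an assumption of a standard install, not something exercised by this run:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo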
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	39f3d7fe5482f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   a7674dd11cbc9       storage-provisioner
	ccddac0572d05       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   13 minutes ago      Running             kube-proxy                0                   b9cd1654d0aa8       kube-proxy-tv7gz
	cd98624191993       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   13 minutes ago      Running             coredns                   0                   be8a8f92ce56e       coredns-5644d7b6d9-qmzl6
	e327d721f3f2f       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   13 minutes ago      Running             etcd                      0                   8db08f325ea8f       etcd-old-k8s-version-788237
	7c8ff8ca133a1       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   13 minutes ago      Running             kube-controller-manager   0                   7788241941dcf       kube-controller-manager-old-k8s-version-788237
	0478c9a69e812       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   13 minutes ago      Running             kube-scheduler            0                   d6c1775cb397c       kube-scheduler-old-k8s-version-788237
	3f47fadd92bab       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   13 minutes ago      Running             kube-apiserver            1                   bb471801af694       kube-apiserver-old-k8s-version-788237
	c79c8713cf405       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   18 minutes ago      Exited              kube-apiserver            0                   bb471801af694       kube-apiserver-old-k8s-version-788237
	
	
	==> coredns [cd98624191993b478451414500a1a65222fd5fa43a8947d332d980468fc0a67a] <==
	.:53
	2024-01-16T03:18:55.969Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2024-01-16T03:18:55.969Z [INFO] CoreDNS-1.6.2
	2024-01-16T03:18:55.969Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-16T03:18:55.996Z [INFO] 127.0.0.1:42780 - 25246 "HINFO IN 306935609111123163.5757372064153635715. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.02665838s
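CoreDNS reports a clean start and is answering queries, so in-cluster DNS can be sanity-checked independently of the failing test. A minimal sketch, assuming a throwaway busybox pod is acceptable in this profile (the pod name and image tag are illustrative):

	kubectl --context old-k8s-version-788237 run dns-probe --image=busybox:1.28 --restart=Never -i --rm -- nslookup kubernetes.default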
	
	
	==> describe nodes <==
	Name:               old-k8s-version-788237
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-788237
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=old-k8s-version-788237
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_18_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:18:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:31:34 +0000   Tue, 16 Jan 2024 03:18:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:31:34 +0000   Tue, 16 Jan 2024 03:18:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:31:34 +0000   Tue, 16 Jan 2024 03:18:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:31:34 +0000   Tue, 16 Jan 2024 03:18:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.91
	  Hostname:    old-k8s-version-788237
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 6f003db2ea7544b986d77ceb575a7aa0
	 System UUID:                6f003db2-ea75-44b9-86d7-7ceb575a7aa0
	 Boot ID:                    373fd605-6a49-4434-b320-0698ea4aaf5a
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-qmzl6                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                etcd-old-k8s-version-788237                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-apiserver-old-k8s-version-788237             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-controller-manager-old-k8s-version-788237    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-proxy-tv7gz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-scheduler-old-k8s-version-788237             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                metrics-server-74d5856cc6-tx8jt                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet, old-k8s-version-788237     Node old-k8s-version-788237 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet, old-k8s-version-788237     Node old-k8s-version-788237 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet, old-k8s-version-788237     Node old-k8s-version-788237 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kube-proxy, old-k8s-version-788237  Starting kube-proxy.
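The node summary above is what kubectl describe node reports for this profile; it can be re-queried directly with the commands below (kubectl top will only return data once the metrics-server aggregation failure shown in the kube-apiserver log further down is resolved):

	kubectl --context old-k8s-version-788237 describe node old-k8s-version-788237
	kubectl --context old-k8s-version-788237 top node old-k8s-version-788237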
	
	
	==> dmesg <==
	[Jan16 03:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070635] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.539427] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jan16 03:13] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153619] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.445032] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.117977] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.125274] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.169050] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.106971] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.248670] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +18.819809] systemd-fstab-generator[1016]: Ignoring "noauto" for root device
	[  +0.490027] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +26.975714] kauditd_printk_skb: 13 callbacks suppressed
	[Jan16 03:14] kauditd_printk_skb: 2 callbacks suppressed
	[Jan16 03:18] systemd-fstab-generator[3094]: Ignoring "noauto" for root device
	[  +0.779988] kauditd_printk_skb: 6 callbacks suppressed
	[Jan16 03:19] hrtimer: interrupt took 2584251 ns
	[  +1.203884] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [e327d721f3f2f3b4ab0bbb5f1278403fea05f4be2b069c149c00c4a8c8a45fff] <==
	2024-01-16 03:18:29.154981 I | raft: newRaft 3a19c1a50e8a825c [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2024-01-16 03:18:29.154985 I | raft: 3a19c1a50e8a825c became follower at term 1
	2024-01-16 03:18:29.163581 W | auth: simple token is not cryptographically signed
	2024-01-16 03:18:29.168830 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-16 03:18:29.170880 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-16 03:18:29.171088 I | embed: listening for metrics on http://192.168.39.91:2381
	2024-01-16 03:18:29.171372 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-16 03:18:29.171635 I | etcdserver: 3a19c1a50e8a825c as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-01-16 03:18:29.171988 I | etcdserver/membership: added member 3a19c1a50e8a825c [https://192.168.39.91:2380] to cluster 674de9ca81299bdc
	2024-01-16 03:18:29.955421 I | raft: 3a19c1a50e8a825c is starting a new election at term 1
	2024-01-16 03:18:29.955609 I | raft: 3a19c1a50e8a825c became candidate at term 2
	2024-01-16 03:18:29.955638 I | raft: 3a19c1a50e8a825c received MsgVoteResp from 3a19c1a50e8a825c at term 2
	2024-01-16 03:18:29.955661 I | raft: 3a19c1a50e8a825c became leader at term 2
	2024-01-16 03:18:29.955677 I | raft: raft.node: 3a19c1a50e8a825c elected leader 3a19c1a50e8a825c at term 2
	2024-01-16 03:18:29.956259 I | etcdserver: setting up the initial cluster version to 3.3
	2024-01-16 03:18:29.956679 I | etcdserver: published {Name:old-k8s-version-788237 ClientURLs:[https://192.168.39.91:2379]} to cluster 674de9ca81299bdc
	2024-01-16 03:18:29.957032 I | embed: ready to serve client requests
	2024-01-16 03:18:29.957893 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-16 03:18:29.957975 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-16 03:18:29.958102 I | embed: ready to serve client requests
	2024-01-16 03:18:29.959253 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-16 03:18:29.960674 I | embed: serving client requests on 192.168.39.91:2379
	2024-01-16 03:18:55.245295 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/metrics-server\" " with result "range_response_count:1 size:2853" took too long (103.903871ms) to execute
	2024-01-16 03:28:30.406088 I | mvcc: store.index: compact 664
	2024-01-16 03:28:30.412600 I | mvcc: finished scheduled compaction at 664 (took 5.515886ms)
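etcd won its single-node election and is compacting on schedule, so the datastore itself looks healthy. To spot-check it from inside the VM, a sketch along these lines can be used; the endpoint and certificate paths are taken from the etcd startup log above, while reusing the server certificate for client auth is an assumption (substitute a dedicated client certificate if client-cert-auth rejects it):

	sudo ETCDCTL_API=3 etcdctl \
	  --endpoints=https://192.168.39.91:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health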
	
	
	==> kernel <==
	 03:32:02 up 19 min,  0 users,  load average: 0.29, 0.37, 0.27
	Linux old-k8s-version-788237 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [3f47fadd92bab56c45d6d368728025ede6d264522c3f0cfd263543f5d68ae0e2] <==
	I0116 03:24:34.796281       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:24:34.796585       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:24:34.796707       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:24:34.796739       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:26:34.797165       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:26:34.797595       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:26:34.797703       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:26:34.797728       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:28:34.798867       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:28:34.799279       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:28:34.799446       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:28:34.799560       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:29:34.799843       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:29:34.800191       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:29:34.800301       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:29:34.800384       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:31:34.800968       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:31:34.801123       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:31:34.801251       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:31:34.801263       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [c79c8713cf40509eb13eda6bc933e764ae4bb84455c0a1ed89ca21d32bc667cc] <==
	W0116 03:18:22.685337       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.685285       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.685394       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.687650       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.687831       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.687884       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.687888       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.688046       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.688712       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.688781       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.688828       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.688829       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:22.688863       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:23.973382       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:23.974995       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.006764       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.020625       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.024026       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.039443       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.043840       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.083238       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.085915       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.089106       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.118713       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:18:24.123982       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-controller-manager [7c8ff8ca133a1b2c457758f11318941da96fd0179f38c83dff5592244df7855b] <==
	W0116 03:25:50.063344       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:25:57.581358       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:26:22.065809       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:26:27.834090       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:26:54.069391       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:26:58.086287       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:27:26.071753       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:27:28.338939       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:27:58.074221       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:27:58.591057       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0116 03:28:28.843601       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:28:30.076943       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:28:59.096227       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:29:02.079188       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:29:29.348789       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:29:34.081419       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:29:59.601136       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:30:06.083440       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:30:29.853370       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:30:38.085891       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:31:00.105980       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:31:10.088370       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:31:30.358262       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:31:42.090918       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:32:00.610655       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [ccddac0572d056ac2605ff0ddb55170ab09f4e0657bcb57b1a8faab6d748f1ae] <==
	W0116 03:18:56.906029       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0116 03:18:56.917994       1 node.go:135] Successfully retrieved node IP: 192.168.39.91
	I0116 03:18:56.918059       1 server_others.go:149] Using iptables Proxier.
	I0116 03:18:56.918692       1 server.go:529] Version: v1.16.0
	I0116 03:18:56.926983       1 config.go:313] Starting service config controller
	I0116 03:18:56.927050       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0116 03:18:56.927235       1 config.go:131] Starting endpoints config controller
	I0116 03:18:56.927250       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0116 03:18:57.030706       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0116 03:18:57.030814       1 shared_informer.go:204] Caches are synced for service config 
	
	
	==> kube-scheduler [0478c9a69e8126feecf553fe27b1082236f91220b1da2fa67d23a45755014ee9] <==
	I0116 03:18:33.809936       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0116 03:18:33.810798       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0116 03:18:33.845650       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 03:18:33.862258       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:18:33.862431       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 03:18:33.863518       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 03:18:33.863638       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:18:33.865597       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 03:18:33.865680       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:18:33.865713       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:18:33.865754       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 03:18:33.865784       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 03:18:33.866803       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:18:34.847580       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 03:18:34.864577       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:18:34.868137       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 03:18:34.873391       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 03:18:34.877735       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:18:34.881272       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 03:18:34.883349       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:18:34.886121       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:18:34.887661       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 03:18:34.890590       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 03:18:34.891927       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:18:53.634997       1 factory.go:585] pod is already present in the activeQ
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:13:01 UTC, ends at Tue 2024-01-16 03:32:02 UTC. --
	Jan 16 03:27:25 old-k8s-version-788237 kubelet[3100]: E0116 03:27:25.150289    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:27:38 old-k8s-version-788237 kubelet[3100]: E0116 03:27:38.150038    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:27:51 old-k8s-version-788237 kubelet[3100]: E0116 03:27:51.150845    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:28:06 old-k8s-version-788237 kubelet[3100]: E0116 03:28:06.151000    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:28:17 old-k8s-version-788237 kubelet[3100]: E0116 03:28:17.150637    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:28:26 old-k8s-version-788237 kubelet[3100]: E0116 03:28:26.317778    3100 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 16 03:28:28 old-k8s-version-788237 kubelet[3100]: E0116 03:28:28.150416    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:28:43 old-k8s-version-788237 kubelet[3100]: E0116 03:28:43.150394    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:28:58 old-k8s-version-788237 kubelet[3100]: E0116 03:28:58.150381    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:29:13 old-k8s-version-788237 kubelet[3100]: E0116 03:29:13.150244    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:29:27 old-k8s-version-788237 kubelet[3100]: E0116 03:29:27.150617    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:29:39 old-k8s-version-788237 kubelet[3100]: E0116 03:29:39.150812    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:29:53 old-k8s-version-788237 kubelet[3100]: E0116 03:29:53.166158    3100 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 03:29:53 old-k8s-version-788237 kubelet[3100]: E0116 03:29:53.166295    3100 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 03:29:53 old-k8s-version-788237 kubelet[3100]: E0116 03:29:53.166365    3100 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 03:29:53 old-k8s-version-788237 kubelet[3100]: E0116 03:29:53.166408    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 16 03:30:04 old-k8s-version-788237 kubelet[3100]: E0116 03:30:04.153172    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:30:18 old-k8s-version-788237 kubelet[3100]: E0116 03:30:18.150206    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:30:33 old-k8s-version-788237 kubelet[3100]: E0116 03:30:33.151185    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:30:48 old-k8s-version-788237 kubelet[3100]: E0116 03:30:48.150173    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:31:03 old-k8s-version-788237 kubelet[3100]: E0116 03:31:03.150364    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:31:18 old-k8s-version-788237 kubelet[3100]: E0116 03:31:18.150635    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:31:30 old-k8s-version-788237 kubelet[3100]: E0116 03:31:30.150188    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:31:41 old-k8s-version-788237 kubelet[3100]: E0116 03:31:41.150859    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:31:56 old-k8s-version-788237 kubelet[3100]: E0116 03:31:56.150675    3100 pod_workers.go:191] Error syncing pod 790d860a-d27f-4535-94d8-64f40cb79071 ("metrics-server-74d5856cc6-tx8jt_kube-system(790d860a-d27f-4535-94d8-64f40cb79071)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [39f3d7fe5482f24d0405541c570378b116cdad4537f37ecd47a184e266678440] <==
	I0116 03:18:57.180332       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:18:57.193905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:18:57.193979       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:18:57.203549       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:18:57.203725       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-788237_9dbd0fef-2950-40a3-bfce-0a7c3322bd4e!
	I0116 03:18:57.207432       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a352f12e-5d84-4668-bd31-56150fefa2b8", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-788237_9dbd0fef-2950-40a3-bfce-0a7c3322bd4e became leader
	I0116 03:18:57.304411       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-788237_9dbd0fef-2950-40a3-bfce-0a7c3322bd4e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-788237 -n old-k8s-version-788237
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-788237 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-tx8jt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-788237 describe pod metrics-server-74d5856cc6-tx8jt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-788237 describe pod metrics-server-74d5856cc6-tx8jt: exit status 1 (76.447403ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-tx8jt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-788237 describe pod metrics-server-74d5856cc6-tx8jt: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (174.82s)
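For reference, the post-mortem pod check above can be reproduced by hand with roughly the same two kubectl calls the helpers run (helpers_test.go:261 and :277). This is only a sketch against the context from this run; the explicit -n kube-system is an addition here, based on the pod name metrics-server-74d5856cc6-tx8jt_kube-system in the kubelet log, and may be why the namespace-less describe above returned NotFound:

	kubectl --context old-k8s-version-788237 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'
	kubectl --context old-k8s-version-788237 -n kube-system describe pod metrics-server-74d5856cc6-tx8jt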

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (77.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-775571 -n default-k8s-diff-port-775571
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-16 03:33:50.080090576 +0000 UTC m=+5607.963917151
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-775571 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-775571 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.569µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-775571 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
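A quick manual follow-up, for the case where the scraper deployment did come up but with an unexpected image, is a jsonpath query against the same context. The deployment and namespace names are taken from the describe command logged above; the jsonpath expression itself is only a suggested sketch:

	kubectl --context default-k8s-diff-port-775571 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'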
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-775571 -n default-k8s-diff-port-775571
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-775571 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-775571 logs -n 25: (1.436687738s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-934668                                   | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-480663            | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-788237        | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC | 16 Jan 24 03:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-775571  | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC | 16 Jan 24 03:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC |                     |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-934668                  | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-480663                 | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-934668                                   | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC | 16 Jan 24 03:24 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC | 16 Jan 24 03:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-788237             | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC | 16 Jan 24 03:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-775571       | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-775571 | jenkins | v1.32.0 | 16 Jan 24 03:08 UTC | 16 Jan 24 03:23 UTC |
	|         | default-k8s-diff-port-775571                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-788237                              | old-k8s-version-788237       | jenkins | v1.32.0 | 16 Jan 24 03:32 UTC | 16 Jan 24 03:32 UTC |
	| start   | -p newest-cni-190843 --memory=2200 --alsologtostderr   | newest-cni-190843            | jenkins | v1.32.0 | 16 Jan 24 03:32 UTC | 16 Jan 24 03:33 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-934668                                   | no-preload-934668            | jenkins | v1.32.0 | 16 Jan 24 03:32 UTC | 16 Jan 24 03:32 UTC |
	| start   | -p auto-278325 --memory=3072                           | auto-278325                  | jenkins | v1.32.0 | 16 Jan 24 03:32 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-190843             | newest-cni-190843            | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC | 16 Jan 24 03:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-190843                                   | newest-cni-190843            | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC | 16 Jan 24 03:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-190843                  | newest-cni-190843            | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC | 16 Jan 24 03:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-190843 --memory=2200 --alsologtostderr   | newest-cni-190843            | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p embed-certs-480663                                  | embed-certs-480663           | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC | 16 Jan 24 03:33 UTC |
	| start   | -p kindnet-278325                                      | kindnet-278325               | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:33:29
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:33:29.955161 1018250 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:33:29.955355 1018250 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:33:29.955368 1018250 out.go:309] Setting ErrFile to fd 2...
	I0116 03:33:29.955378 1018250 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:33:29.955603 1018250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 03:33:29.956230 1018250 out.go:303] Setting JSON to false
	I0116 03:33:29.957409 1018250 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":15359,"bootTime":1705360651,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 03:33:29.957478 1018250 start.go:138] virtualization: kvm guest
	I0116 03:33:29.959532 1018250 out.go:177] * [kindnet-278325] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 03:33:29.961571 1018250 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:33:29.961534 1018250 notify.go:220] Checking for updates...
	I0116 03:33:29.963495 1018250 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:33:29.965304 1018250 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:33:29.966956 1018250 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 03:33:29.968536 1018250 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 03:33:29.969999 1018250 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:33:29.971850 1018250 config.go:182] Loaded profile config "auto-278325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:33:29.971959 1018250 config.go:182] Loaded profile config "default-k8s-diff-port-775571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:33:29.972080 1018250 config.go:182] Loaded profile config "newest-cni-190843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:33:29.972180 1018250 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:33:30.015372 1018250 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 03:33:30.017028 1018250 start.go:298] selected driver: kvm2
	I0116 03:33:30.017048 1018250 start.go:902] validating driver "kvm2" against <nil>
	I0116 03:33:30.017075 1018250 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:33:30.018180 1018250 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:33:30.018290 1018250 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17967-971255/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 03:33:30.036088 1018250 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 03:33:30.036161 1018250 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 03:33:30.036393 1018250 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 03:33:30.036476 1018250 cni.go:84] Creating CNI manager for "kindnet"
	I0116 03:33:30.036496 1018250 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 03:33:30.036510 1018250 start_flags.go:321] config:
	{Name:kindnet-278325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-278325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:33:30.036704 1018250 iso.go:125] acquiring lock: {Name:mkc0ce0b4b435c5fb7570521b794e5982bb7bf6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:33:30.038579 1018250 out.go:177] * Starting control plane node kindnet-278325 in cluster kindnet-278325
	I0116 03:33:28.972380 1017511 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004033 seconds
	I0116 03:33:28.972531 1017511 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:33:29.003276 1017511 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:33:29.552211 1017511 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:33:29.552505 1017511 kubeadm.go:322] [mark-control-plane] Marking the node auto-278325 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 03:33:30.069137 1017511 kubeadm.go:322] [bootstrap-token] Using token: go7nhh.9j9iva8fcto46aki
	I0116 03:33:30.070816 1017511 out.go:204]   - Configuring RBAC rules ...
	I0116 03:33:30.070975 1017511 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:33:30.086569 1017511 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:33:30.100290 1017511 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:33:30.104762 1017511 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:33:30.108927 1017511 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:33:30.113631 1017511 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:33:30.136595 1017511 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:33:30.445436 1017511 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:33:30.493548 1017511 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:33:30.494716 1017511 kubeadm.go:322] 
	I0116 03:33:30.494821 1017511 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:33:30.494834 1017511 kubeadm.go:322] 
	I0116 03:33:30.494932 1017511 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:33:30.494942 1017511 kubeadm.go:322] 
	I0116 03:33:30.494981 1017511 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:33:30.495059 1017511 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:33:30.495135 1017511 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:33:30.495158 1017511 kubeadm.go:322] 
	I0116 03:33:30.495235 1017511 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 03:33:30.495242 1017511 kubeadm.go:322] 
	I0116 03:33:30.495316 1017511 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 03:33:30.495327 1017511 kubeadm.go:322] 
	I0116 03:33:30.495426 1017511 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:33:30.495552 1017511 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:33:30.495658 1017511 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:33:30.495667 1017511 kubeadm.go:322] 
	I0116 03:33:30.495784 1017511 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:33:30.495891 1017511 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:33:30.495905 1017511 kubeadm.go:322] 
	I0116 03:33:30.496022 1017511 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token go7nhh.9j9iva8fcto46aki \
	I0116 03:33:30.496171 1017511 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b \
	I0116 03:33:30.496234 1017511 kubeadm.go:322] 	--control-plane 
	I0116 03:33:30.496254 1017511 kubeadm.go:322] 
	I0116 03:33:30.496373 1017511 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:33:30.496387 1017511 kubeadm.go:322] 
	I0116 03:33:30.496510 1017511 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token go7nhh.9j9iva8fcto46aki \
	I0116 03:33:30.496653 1017511 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b2d63eda0b757db09586d44c0338fc83976b062d99727312f950d3846e4844b 
	I0116 03:33:30.497003 1017511 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:33:30.497043 1017511 cni.go:84] Creating CNI manager for ""
	I0116 03:33:30.497072 1017511 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:33:30.499299 1017511 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:33:26.902694 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:26.902739 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:33:26.902795 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:33:26.902732 1017976 retry.go:31] will retry after 2.02873104s: waiting for machine to come up
	I0116 03:33:28.935089 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:28.935123 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:33:28.935141 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:33:28.934675 1017976 retry.go:31] will retry after 3.60229292s: waiting for machine to come up
	I0116 03:33:30.500567 1017511 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:33:30.528878 1017511 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:33:30.560924 1017511 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:33:30.561017 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:30.561059 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=auto-278325 minikube.k8s.io/updated_at=2024_01_16T03_33_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:30.894512 1017511 ops.go:34] apiserver oom_adj: -16
	I0116 03:33:30.907969 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:31.408075 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:31.908246 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:30.040019 1018250 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:33:30.040073 1018250 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 03:33:30.040089 1018250 cache.go:56] Caching tarball of preloaded images
	I0116 03:33:30.040180 1018250 preload.go:174] Found /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 03:33:30.040193 1018250 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 03:33:30.040311 1018250 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/kindnet-278325/config.json ...
	I0116 03:33:30.040336 1018250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/kindnet-278325/config.json: {Name:mk31ddb6e6d9a6b5fb742eb763860f2852cda9a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:33:30.040497 1018250 start.go:365] acquiring machines lock for kindnet-278325: {Name:mk61734672272adcae1bdaf20e72828e69d219db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:33:32.538952 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:32.539391 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | unable to find current IP address of domain newest-cni-190843 in network mk-newest-cni-190843
	I0116 03:33:32.539418 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | I0116 03:33:32.539346 1017976 retry.go:31] will retry after 4.252823491s: waiting for machine to come up
	I0116 03:33:32.408307 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:32.908873 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:33.408768 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:33.908201 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:34.408360 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:34.908251 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:35.408382 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:35.908894 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:36.408647 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:36.907982 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:38.008724 1018250 start.go:369] acquired machines lock for "kindnet-278325" in 7.968184584s
	I0116 03:33:38.008805 1018250 start.go:93] Provisioning new machine with config: &{Name:kindnet-278325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-278325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:33:38.008948 1018250 start.go:125] createHost starting for "" (driver="kvm2")
	I0116 03:33:36.793384 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:36.793833 1017941 main.go:141] libmachine: (newest-cni-190843) Found IP for machine: 192.168.39.3
	I0116 03:33:36.793857 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has current primary IP address 192.168.39.3 and MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:36.793867 1017941 main.go:141] libmachine: (newest-cni-190843) Reserving static IP address...
	I0116 03:33:36.794413 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | found host DHCP lease matching {name: "newest-cni-190843", mac: "52:54:00:b0:40:c6", ip: "192.168.39.3"} in network mk-newest-cni-190843: {Iface:virbr3 ExpiryTime:2024-01-16 04:33:30 +0000 UTC Type:0 Mac:52:54:00:b0:40:c6 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-190843 Clientid:01:52:54:00:b0:40:c6}
	I0116 03:33:36.794454 1017941 main.go:141] libmachine: (newest-cni-190843) Reserved static IP address: 192.168.39.3
	I0116 03:33:36.794486 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | skip adding static IP to network mk-newest-cni-190843 - found existing host DHCP lease matching {name: "newest-cni-190843", mac: "52:54:00:b0:40:c6", ip: "192.168.39.3"}
	I0116 03:33:36.794512 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | Getting to WaitForSSH function...
	I0116 03:33:36.794536 1017941 main.go:141] libmachine: (newest-cni-190843) Waiting for SSH to be available...
	I0116 03:33:36.796760 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:36.797153 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:40:c6", ip: ""} in network mk-newest-cni-190843: {Iface:virbr3 ExpiryTime:2024-01-16 04:33:30 +0000 UTC Type:0 Mac:52:54:00:b0:40:c6 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-190843 Clientid:01:52:54:00:b0:40:c6}
	I0116 03:33:36.797181 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined IP address 192.168.39.3 and MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:36.797277 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | Using SSH client type: external
	I0116 03:33:36.797305 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/id_rsa (-rw-------)
	I0116 03:33:36.797337 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:33:36.797351 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | About to run SSH command:
	I0116 03:33:36.797360 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | exit 0
	I0116 03:33:36.885976 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | SSH cmd err, output: <nil>: 
	I0116 03:33:36.886418 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetConfigRaw
	I0116 03:33:36.887129 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetIP
	I0116 03:33:36.889554 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:36.889942 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:40:c6", ip: ""} in network mk-newest-cni-190843: {Iface:virbr3 ExpiryTime:2024-01-16 04:33:30 +0000 UTC Type:0 Mac:52:54:00:b0:40:c6 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-190843 Clientid:01:52:54:00:b0:40:c6}
	I0116 03:33:36.889979 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined IP address 192.168.39.3 and MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:36.890171 1017941 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/newest-cni-190843/config.json ...
	I0116 03:33:36.890383 1017941 machine.go:88] provisioning docker machine ...
	I0116 03:33:36.890413 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .DriverName
	I0116 03:33:36.890659 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetMachineName
	I0116 03:33:36.890833 1017941 buildroot.go:166] provisioning hostname "newest-cni-190843"
	I0116 03:33:36.890856 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetMachineName
	I0116 03:33:36.891012 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHHostname
	I0116 03:33:36.893435 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:36.893818 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:40:c6", ip: ""} in network mk-newest-cni-190843: {Iface:virbr3 ExpiryTime:2024-01-16 04:33:30 +0000 UTC Type:0 Mac:52:54:00:b0:40:c6 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-190843 Clientid:01:52:54:00:b0:40:c6}
	I0116 03:33:36.893873 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined IP address 192.168.39.3 and MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:36.893979 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHPort
	I0116 03:33:36.894184 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHKeyPath
	I0116 03:33:36.894384 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHKeyPath
	I0116 03:33:36.894575 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHUsername
	I0116 03:33:36.894783 1017941 main.go:141] libmachine: Using SSH client type: native
	I0116 03:33:36.895178 1017941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0116 03:33:36.895194 1017941 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-190843 && echo "newest-cni-190843" | sudo tee /etc/hostname
	I0116 03:33:37.024977 1017941 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-190843
	
	I0116 03:33:37.025012 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHHostname
	I0116 03:33:37.028065 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:37.028458 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:40:c6", ip: ""} in network mk-newest-cni-190843: {Iface:virbr3 ExpiryTime:2024-01-16 04:33:30 +0000 UTC Type:0 Mac:52:54:00:b0:40:c6 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-190843 Clientid:01:52:54:00:b0:40:c6}
	I0116 03:33:37.028489 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined IP address 192.168.39.3 and MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:37.028651 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHPort
	I0116 03:33:37.028871 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHKeyPath
	I0116 03:33:37.029032 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHKeyPath
	I0116 03:33:37.029150 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHUsername
	I0116 03:33:37.029317 1017941 main.go:141] libmachine: Using SSH client type: native
	I0116 03:33:37.029740 1017941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0116 03:33:37.029765 1017941 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-190843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-190843/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-190843' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:33:37.151022 1017941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:33:37.151066 1017941 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-971255/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-971255/.minikube}
	I0116 03:33:37.151129 1017941 buildroot.go:174] setting up certificates
	I0116 03:33:37.151148 1017941 provision.go:83] configureAuth start
	I0116 03:33:37.151181 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetMachineName
	I0116 03:33:37.151551 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetIP
	I0116 03:33:37.154753 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:37.155161 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:40:c6", ip: ""} in network mk-newest-cni-190843: {Iface:virbr3 ExpiryTime:2024-01-16 04:33:30 +0000 UTC Type:0 Mac:52:54:00:b0:40:c6 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-190843 Clientid:01:52:54:00:b0:40:c6}
	I0116 03:33:37.155192 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined IP address 192.168.39.3 and MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:37.155402 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHHostname
	I0116 03:33:37.158280 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:37.158694 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:40:c6", ip: ""} in network mk-newest-cni-190843: {Iface:virbr3 ExpiryTime:2024-01-16 04:33:30 +0000 UTC Type:0 Mac:52:54:00:b0:40:c6 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-190843 Clientid:01:52:54:00:b0:40:c6}
	I0116 03:33:37.158724 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined IP address 192.168.39.3 and MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:37.158857 1017941 provision.go:138] copyHostCerts
	I0116 03:33:37.158954 1017941 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem, removing ...
	I0116 03:33:37.158973 1017941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem
	I0116 03:33:37.159682 1017941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/ca.pem (1082 bytes)
	I0116 03:33:37.159805 1017941 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem, removing ...
	I0116 03:33:37.159816 1017941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem
	I0116 03:33:37.159859 1017941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/cert.pem (1123 bytes)
	I0116 03:33:37.159930 1017941 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem, removing ...
	I0116 03:33:37.159940 1017941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem
	I0116 03:33:37.159975 1017941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-971255/.minikube/key.pem (1675 bytes)
	I0116 03:33:37.160041 1017941 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem org=jenkins.newest-cni-190843 san=[192.168.39.3 192.168.39.3 localhost 127.0.0.1 minikube newest-cni-190843]
	I0116 03:33:37.240579 1017941 provision.go:172] copyRemoteCerts
	I0116 03:33:37.240670 1017941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:33:37.240723 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHHostname
	I0116 03:33:37.243855 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:37.244289 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:40:c6", ip: ""} in network mk-newest-cni-190843: {Iface:virbr3 ExpiryTime:2024-01-16 04:33:30 +0000 UTC Type:0 Mac:52:54:00:b0:40:c6 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-190843 Clientid:01:52:54:00:b0:40:c6}
	I0116 03:33:37.244326 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined IP address 192.168.39.3 and MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:37.244512 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHPort
	I0116 03:33:37.244728 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHKeyPath
	I0116 03:33:37.244959 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHUsername
	I0116 03:33:37.245138 1017941 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/id_rsa Username:docker}
	I0116 03:33:37.332424 1017941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 03:33:37.357332 1017941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:33:37.380263 1017941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:33:37.404478 1017941 provision.go:86] duration metric: configureAuth took 253.309555ms
	I0116 03:33:37.404510 1017941 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:33:37.404764 1017941 config.go:182] Loaded profile config "newest-cni-190843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:33:37.404883 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHHostname
	I0116 03:33:37.407892 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:37.408349 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:40:c6", ip: ""} in network mk-newest-cni-190843: {Iface:virbr3 ExpiryTime:2024-01-16 04:33:30 +0000 UTC Type:0 Mac:52:54:00:b0:40:c6 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-190843 Clientid:01:52:54:00:b0:40:c6}
	I0116 03:33:37.408392 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined IP address 192.168.39.3 and MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:37.408551 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHPort
	I0116 03:33:37.408810 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHKeyPath
	I0116 03:33:37.409028 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHKeyPath
	I0116 03:33:37.409220 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHUsername
	I0116 03:33:37.409410 1017941 main.go:141] libmachine: Using SSH client type: native
	I0116 03:33:37.409749 1017941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0116 03:33:37.409770 1017941 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:33:37.753698 1017941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:33:37.753731 1017941 machine.go:91] provisioned docker machine in 863.331407ms
	I0116 03:33:37.753744 1017941 start.go:300] post-start starting for "newest-cni-190843" (driver="kvm2")
	I0116 03:33:37.753764 1017941 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:33:37.753792 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .DriverName
	I0116 03:33:37.754257 1017941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:33:37.754300 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHHostname
	I0116 03:33:37.757244 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:37.757727 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:40:c6", ip: ""} in network mk-newest-cni-190843: {Iface:virbr3 ExpiryTime:2024-01-16 04:33:30 +0000 UTC Type:0 Mac:52:54:00:b0:40:c6 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-190843 Clientid:01:52:54:00:b0:40:c6}
	I0116 03:33:37.757759 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined IP address 192.168.39.3 and MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:37.757932 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHPort
	I0116 03:33:37.758134 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHKeyPath
	I0116 03:33:37.758316 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHUsername
	I0116 03:33:37.758496 1017941 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/id_rsa Username:docker}
	I0116 03:33:37.849090 1017941 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:33:37.853845 1017941 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:33:37.853877 1017941 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/addons for local assets ...
	I0116 03:33:37.853943 1017941 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-971255/.minikube/files for local assets ...
	I0116 03:33:37.854024 1017941 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem -> 9784822.pem in /etc/ssl/certs
	I0116 03:33:37.854138 1017941 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:33:37.863759 1017941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:33:37.886691 1017941 start.go:303] post-start completed in 132.925055ms
	I0116 03:33:37.886729 1017941 fix.go:56] fixHost completed within 21.249909927s
	I0116 03:33:37.886755 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHHostname
	I0116 03:33:37.889770 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:37.890169 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:40:c6", ip: ""} in network mk-newest-cni-190843: {Iface:virbr3 ExpiryTime:2024-01-16 04:33:30 +0000 UTC Type:0 Mac:52:54:00:b0:40:c6 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-190843 Clientid:01:52:54:00:b0:40:c6}
	I0116 03:33:37.890205 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined IP address 192.168.39.3 and MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:37.890337 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHPort
	I0116 03:33:37.890554 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHKeyPath
	I0116 03:33:37.890704 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHKeyPath
	I0116 03:33:37.890879 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHUsername
	I0116 03:33:37.891096 1017941 main.go:141] libmachine: Using SSH client type: native
	I0116 03:33:37.891473 1017941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0116 03:33:37.891493 1017941 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:33:38.008494 1017941 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376017.950874940
	
	I0116 03:33:38.008527 1017941 fix.go:206] guest clock: 1705376017.950874940
	I0116 03:33:38.008539 1017941 fix.go:219] Guest: 2024-01-16 03:33:37.95087494 +0000 UTC Remote: 2024-01-16 03:33:37.886734014 +0000 UTC m=+21.423852815 (delta=64.140926ms)
	I0116 03:33:38.008608 1017941 fix.go:190] guest clock delta is within tolerance: 64.140926ms
	I0116 03:33:38.008619 1017941 start.go:83] releasing machines lock for "newest-cni-190843", held for 21.37181427s
	I0116 03:33:38.008656 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .DriverName
	I0116 03:33:38.008996 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetIP
	I0116 03:33:38.012121 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:38.012553 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:40:c6", ip: ""} in network mk-newest-cni-190843: {Iface:virbr3 ExpiryTime:2024-01-16 04:33:30 +0000 UTC Type:0 Mac:52:54:00:b0:40:c6 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-190843 Clientid:01:52:54:00:b0:40:c6}
	I0116 03:33:38.012586 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined IP address 192.168.39.3 and MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:38.012764 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .DriverName
	I0116 03:33:38.013601 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .DriverName
	I0116 03:33:38.013859 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .DriverName
	I0116 03:33:38.014028 1017941 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:33:38.014082 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHHostname
	I0116 03:33:38.014125 1017941 ssh_runner.go:195] Run: cat /version.json
	I0116 03:33:38.014155 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHHostname
	I0116 03:33:38.017233 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:38.017503 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:38.017721 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:40:c6", ip: ""} in network mk-newest-cni-190843: {Iface:virbr3 ExpiryTime:2024-01-16 04:33:30 +0000 UTC Type:0 Mac:52:54:00:b0:40:c6 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-190843 Clientid:01:52:54:00:b0:40:c6}
	I0116 03:33:38.017757 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined IP address 192.168.39.3 and MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:38.017866 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:40:c6", ip: ""} in network mk-newest-cni-190843: {Iface:virbr3 ExpiryTime:2024-01-16 04:33:30 +0000 UTC Type:0 Mac:52:54:00:b0:40:c6 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-190843 Clientid:01:52:54:00:b0:40:c6}
	I0116 03:33:38.017911 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined IP address 192.168.39.3 and MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:38.017960 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHPort
	I0116 03:33:38.018060 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHPort
	I0116 03:33:38.018175 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHKeyPath
	I0116 03:33:38.018269 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHKeyPath
	I0116 03:33:38.018381 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHUsername
	I0116 03:33:38.018464 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetSSHUsername
	I0116 03:33:38.018602 1017941 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/id_rsa Username:docker}
	I0116 03:33:38.018636 1017941 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/newest-cni-190843/id_rsa Username:docker}
	I0116 03:33:38.104028 1017941 ssh_runner.go:195] Run: systemctl --version
	I0116 03:33:38.134141 1017941 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:33:38.291204 1017941 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:33:38.297792 1017941 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:33:38.297899 1017941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:33:38.313975 1017941 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:33:38.314009 1017941 start.go:475] detecting cgroup driver to use...
	I0116 03:33:38.314105 1017941 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:33:38.333152 1017941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:33:38.346760 1017941 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:33:38.346849 1017941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:33:38.360527 1017941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:33:38.374569 1017941 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:33:38.503527 1017941 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:33:38.648963 1017941 docker.go:233] disabling docker service ...
	I0116 03:33:38.649077 1017941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:33:38.665130 1017941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:33:38.679182 1017941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:33:38.810321 1017941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:33:38.958604 1017941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:33:38.976286 1017941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:33:38.995673 1017941 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:33:38.995748 1017941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:33:39.005665 1017941 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:33:39.005752 1017941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:33:39.018634 1017941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:33:39.030680 1017941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:33:39.040670 1017941 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:33:39.050981 1017941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:33:39.060218 1017941 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:33:39.060279 1017941 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:33:39.074741 1017941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:33:39.083815 1017941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:33:39.202023 1017941 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:33:39.389697 1017941 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:33:39.389771 1017941 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:33:39.394868 1017941 start.go:543] Will wait 60s for crictl version
	I0116 03:33:39.394930 1017941 ssh_runner.go:195] Run: which crictl
	I0116 03:33:39.398910 1017941 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:33:39.453748 1017941 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:33:39.453932 1017941 ssh_runner.go:195] Run: crio --version
	I0116 03:33:39.497139 1017941 ssh_runner.go:195] Run: crio --version
	I0116 03:33:39.559132 1017941 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0116 03:33:39.560384 1017941 main.go:141] libmachine: (newest-cni-190843) Calling .GetIP
	I0116 03:33:39.563743 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:39.564215 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:40:c6", ip: ""} in network mk-newest-cni-190843: {Iface:virbr3 ExpiryTime:2024-01-16 04:33:30 +0000 UTC Type:0 Mac:52:54:00:b0:40:c6 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-190843 Clientid:01:52:54:00:b0:40:c6}
	I0116 03:33:39.564247 1017941 main.go:141] libmachine: (newest-cni-190843) DBG | domain newest-cni-190843 has defined IP address 192.168.39.3 and MAC address 52:54:00:b0:40:c6 in network mk-newest-cni-190843
	I0116 03:33:39.564557 1017941 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 03:33:39.569090 1017941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:33:39.584033 1017941 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0116 03:33:38.011598 1018250 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0116 03:33:38.011819 1018250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:33:38.011892 1018250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:33:38.030059 1018250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I0116 03:33:38.030605 1018250 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:33:38.031315 1018250 main.go:141] libmachine: Using API Version  1
	I0116 03:33:38.031342 1018250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:33:38.031678 1018250 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:33:38.031913 1018250 main.go:141] libmachine: (kindnet-278325) Calling .GetMachineName
	I0116 03:33:38.032111 1018250 main.go:141] libmachine: (kindnet-278325) Calling .DriverName
	I0116 03:33:38.032270 1018250 start.go:159] libmachine.API.Create for "kindnet-278325" (driver="kvm2")
	I0116 03:33:38.032306 1018250 client.go:168] LocalClient.Create starting
	I0116 03:33:38.032349 1018250 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem
	I0116 03:33:38.032400 1018250 main.go:141] libmachine: Decoding PEM data...
	I0116 03:33:38.032427 1018250 main.go:141] libmachine: Parsing certificate...
	I0116 03:33:38.032504 1018250 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem
	I0116 03:33:38.032533 1018250 main.go:141] libmachine: Decoding PEM data...
	I0116 03:33:38.032553 1018250 main.go:141] libmachine: Parsing certificate...
	I0116 03:33:38.032586 1018250 main.go:141] libmachine: Running pre-create checks...
	I0116 03:33:38.032603 1018250 main.go:141] libmachine: (kindnet-278325) Calling .PreCreateCheck
	I0116 03:33:38.033003 1018250 main.go:141] libmachine: (kindnet-278325) Calling .GetConfigRaw
	I0116 03:33:38.033504 1018250 main.go:141] libmachine: Creating machine...
	I0116 03:33:38.033519 1018250 main.go:141] libmachine: (kindnet-278325) Calling .Create
	I0116 03:33:38.033632 1018250 main.go:141] libmachine: (kindnet-278325) Creating KVM machine...
	I0116 03:33:38.035054 1018250 main.go:141] libmachine: (kindnet-278325) DBG | found existing default KVM network
	I0116 03:33:38.036469 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:38.036270 1018308 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:09:a8:91} reservation:<nil>}
	I0116 03:33:38.037833 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:38.037696 1018308 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:d1:43:ba} reservation:<nil>}
	I0116 03:33:38.039165 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:38.039065 1018308 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030c9e0}
	I0116 03:33:38.045692 1018250 main.go:141] libmachine: (kindnet-278325) DBG | trying to create private KVM network mk-kindnet-278325 192.168.61.0/24...
	I0116 03:33:38.131431 1018250 main.go:141] libmachine: (kindnet-278325) DBG | private KVM network mk-kindnet-278325 192.168.61.0/24 created
	I0116 03:33:38.131492 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:38.131376 1018308 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 03:33:38.131511 1018250 main.go:141] libmachine: (kindnet-278325) Setting up store path in /home/jenkins/minikube-integration/17967-971255/.minikube/machines/kindnet-278325 ...
	I0116 03:33:38.131530 1018250 main.go:141] libmachine: (kindnet-278325) Building disk image from file:///home/jenkins/minikube-integration/17967-971255/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 03:33:38.131562 1018250 main.go:141] libmachine: (kindnet-278325) Downloading /home/jenkins/minikube-integration/17967-971255/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17967-971255/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 03:33:38.391946 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:38.391811 1018308 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/kindnet-278325/id_rsa...
	I0116 03:33:38.542319 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:38.542134 1018308 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/kindnet-278325/kindnet-278325.rawdisk...
	I0116 03:33:38.542360 1018250 main.go:141] libmachine: (kindnet-278325) DBG | Writing magic tar header
	I0116 03:33:38.542381 1018250 main.go:141] libmachine: (kindnet-278325) DBG | Writing SSH key tar header
	I0116 03:33:38.542401 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:38.542348 1018308 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17967-971255/.minikube/machines/kindnet-278325 ...
	I0116 03:33:38.542569 1018250 main.go:141] libmachine: (kindnet-278325) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube/machines/kindnet-278325 (perms=drwx------)
	I0116 03:33:38.542628 1018250 main.go:141] libmachine: (kindnet-278325) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube/machines (perms=drwxr-xr-x)
	I0116 03:33:38.542642 1018250 main.go:141] libmachine: (kindnet-278325) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube/machines/kindnet-278325
	I0116 03:33:38.542671 1018250 main.go:141] libmachine: (kindnet-278325) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube/machines
	I0116 03:33:38.542690 1018250 main.go:141] libmachine: (kindnet-278325) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 03:33:38.542706 1018250 main.go:141] libmachine: (kindnet-278325) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-971255
	I0116 03:33:38.542720 1018250 main.go:141] libmachine: (kindnet-278325) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 03:33:38.542734 1018250 main.go:141] libmachine: (kindnet-278325) DBG | Checking permissions on dir: /home/jenkins
	I0116 03:33:38.542747 1018250 main.go:141] libmachine: (kindnet-278325) DBG | Checking permissions on dir: /home
	I0116 03:33:38.542760 1018250 main.go:141] libmachine: (kindnet-278325) DBG | Skipping /home - not owner
	I0116 03:33:38.542777 1018250 main.go:141] libmachine: (kindnet-278325) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255/.minikube (perms=drwxr-xr-x)
	I0116 03:33:38.542791 1018250 main.go:141] libmachine: (kindnet-278325) Setting executable bit set on /home/jenkins/minikube-integration/17967-971255 (perms=drwxrwxr-x)
	I0116 03:33:38.542804 1018250 main.go:141] libmachine: (kindnet-278325) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 03:33:38.542816 1018250 main.go:141] libmachine: (kindnet-278325) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 03:33:38.542836 1018250 main.go:141] libmachine: (kindnet-278325) Creating domain...
	I0116 03:33:38.544009 1018250 main.go:141] libmachine: (kindnet-278325) define libvirt domain using xml: 
	I0116 03:33:38.544049 1018250 main.go:141] libmachine: (kindnet-278325) <domain type='kvm'>
	I0116 03:33:38.544064 1018250 main.go:141] libmachine: (kindnet-278325)   <name>kindnet-278325</name>
	I0116 03:33:38.544081 1018250 main.go:141] libmachine: (kindnet-278325)   <memory unit='MiB'>3072</memory>
	I0116 03:33:38.544095 1018250 main.go:141] libmachine: (kindnet-278325)   <vcpu>2</vcpu>
	I0116 03:33:38.544106 1018250 main.go:141] libmachine: (kindnet-278325)   <features>
	I0116 03:33:38.544137 1018250 main.go:141] libmachine: (kindnet-278325)     <acpi/>
	I0116 03:33:38.544156 1018250 main.go:141] libmachine: (kindnet-278325)     <apic/>
	I0116 03:33:38.544165 1018250 main.go:141] libmachine: (kindnet-278325)     <pae/>
	I0116 03:33:38.544177 1018250 main.go:141] libmachine: (kindnet-278325)     
	I0116 03:33:38.544189 1018250 main.go:141] libmachine: (kindnet-278325)   </features>
	I0116 03:33:38.544204 1018250 main.go:141] libmachine: (kindnet-278325)   <cpu mode='host-passthrough'>
	I0116 03:33:38.544216 1018250 main.go:141] libmachine: (kindnet-278325)   
	I0116 03:33:38.544227 1018250 main.go:141] libmachine: (kindnet-278325)   </cpu>
	I0116 03:33:38.544239 1018250 main.go:141] libmachine: (kindnet-278325)   <os>
	I0116 03:33:38.544250 1018250 main.go:141] libmachine: (kindnet-278325)     <type>hvm</type>
	I0116 03:33:38.544262 1018250 main.go:141] libmachine: (kindnet-278325)     <boot dev='cdrom'/>
	I0116 03:33:38.544274 1018250 main.go:141] libmachine: (kindnet-278325)     <boot dev='hd'/>
	I0116 03:33:38.544287 1018250 main.go:141] libmachine: (kindnet-278325)     <bootmenu enable='no'/>
	I0116 03:33:38.544302 1018250 main.go:141] libmachine: (kindnet-278325)   </os>
	I0116 03:33:38.544312 1018250 main.go:141] libmachine: (kindnet-278325)   <devices>
	I0116 03:33:38.544323 1018250 main.go:141] libmachine: (kindnet-278325)     <disk type='file' device='cdrom'>
	I0116 03:33:38.544339 1018250 main.go:141] libmachine: (kindnet-278325)       <source file='/home/jenkins/minikube-integration/17967-971255/.minikube/machines/kindnet-278325/boot2docker.iso'/>
	I0116 03:33:38.544351 1018250 main.go:141] libmachine: (kindnet-278325)       <target dev='hdc' bus='scsi'/>
	I0116 03:33:38.544369 1018250 main.go:141] libmachine: (kindnet-278325)       <readonly/>
	I0116 03:33:38.544384 1018250 main.go:141] libmachine: (kindnet-278325)     </disk>
	I0116 03:33:38.544404 1018250 main.go:141] libmachine: (kindnet-278325)     <disk type='file' device='disk'>
	I0116 03:33:38.544418 1018250 main.go:141] libmachine: (kindnet-278325)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 03:33:38.544435 1018250 main.go:141] libmachine: (kindnet-278325)       <source file='/home/jenkins/minikube-integration/17967-971255/.minikube/machines/kindnet-278325/kindnet-278325.rawdisk'/>
	I0116 03:33:38.544453 1018250 main.go:141] libmachine: (kindnet-278325)       <target dev='hda' bus='virtio'/>
	I0116 03:33:38.544491 1018250 main.go:141] libmachine: (kindnet-278325)     </disk>
	I0116 03:33:38.544514 1018250 main.go:141] libmachine: (kindnet-278325)     <interface type='network'>
	I0116 03:33:38.544531 1018250 main.go:141] libmachine: (kindnet-278325)       <source network='mk-kindnet-278325'/>
	I0116 03:33:38.544544 1018250 main.go:141] libmachine: (kindnet-278325)       <model type='virtio'/>
	I0116 03:33:38.544559 1018250 main.go:141] libmachine: (kindnet-278325)     </interface>
	I0116 03:33:38.544569 1018250 main.go:141] libmachine: (kindnet-278325)     <interface type='network'>
	I0116 03:33:38.544584 1018250 main.go:141] libmachine: (kindnet-278325)       <source network='default'/>
	I0116 03:33:38.544597 1018250 main.go:141] libmachine: (kindnet-278325)       <model type='virtio'/>
	I0116 03:33:38.544612 1018250 main.go:141] libmachine: (kindnet-278325)     </interface>
	I0116 03:33:38.544625 1018250 main.go:141] libmachine: (kindnet-278325)     <serial type='pty'>
	I0116 03:33:38.544639 1018250 main.go:141] libmachine: (kindnet-278325)       <target port='0'/>
	I0116 03:33:38.544656 1018250 main.go:141] libmachine: (kindnet-278325)     </serial>
	I0116 03:33:38.544669 1018250 main.go:141] libmachine: (kindnet-278325)     <console type='pty'>
	I0116 03:33:38.544681 1018250 main.go:141] libmachine: (kindnet-278325)       <target type='serial' port='0'/>
	I0116 03:33:38.544695 1018250 main.go:141] libmachine: (kindnet-278325)     </console>
	I0116 03:33:38.544708 1018250 main.go:141] libmachine: (kindnet-278325)     <rng model='virtio'>
	I0116 03:33:38.544724 1018250 main.go:141] libmachine: (kindnet-278325)       <backend model='random'>/dev/random</backend>
	I0116 03:33:38.544743 1018250 main.go:141] libmachine: (kindnet-278325)     </rng>
	I0116 03:33:38.544757 1018250 main.go:141] libmachine: (kindnet-278325)     
	I0116 03:33:38.544769 1018250 main.go:141] libmachine: (kindnet-278325)     
	I0116 03:33:38.544784 1018250 main.go:141] libmachine: (kindnet-278325)   </devices>
	I0116 03:33:38.544795 1018250 main.go:141] libmachine: (kindnet-278325) </domain>
	I0116 03:33:38.544810 1018250 main.go:141] libmachine: (kindnet-278325) 
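For reference only (this is not part of the captured test output): the XML above is what the kvm2 driver hands to libvirt right before the "Creating domain..." step. Below is a minimal sketch of the same define-and-start sequence using the libvirt Go bindings, assuming the XML has been saved locally as kindnet-278325.xml and that the qemu:///system URI from the profile is reachable; the package path and file name are assumptions, not taken from the log.

    // sketch.go - define and start a KVM domain from a saved XML description.
    // Requires the libvirt C library and the libvirt.org/go/libvirt bindings.
    package main

    import (
        "fmt"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // Hypothetical local copy of the domain XML logged above.
        xml, err := os.ReadFile("kindnet-278325.xml")
        if err != nil {
            panic(err)
        }

        // Same connection URI as KVMQemuURI in the minikube profile.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Persist the domain definition, then start it - the two steps the
        // driver logs as "define libvirt domain using xml" and "Creating domain...".
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            panic(err)
        }

        name, _ := dom.GetName()
        fmt.Println("defined and started domain:", name)
    }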
	I0116 03:33:38.549887 1018250 main.go:141] libmachine: (kindnet-278325) DBG | domain kindnet-278325 has defined MAC address 52:54:00:41:8f:52 in network default
	I0116 03:33:38.550497 1018250 main.go:141] libmachine: (kindnet-278325) Ensuring networks are active...
	I0116 03:33:38.550529 1018250 main.go:141] libmachine: (kindnet-278325) DBG | domain kindnet-278325 has defined MAC address 52:54:00:6a:b2:f4 in network mk-kindnet-278325
	I0116 03:33:38.551270 1018250 main.go:141] libmachine: (kindnet-278325) Ensuring network default is active
	I0116 03:33:38.551624 1018250 main.go:141] libmachine: (kindnet-278325) Ensuring network mk-kindnet-278325 is active
	I0116 03:33:38.552169 1018250 main.go:141] libmachine: (kindnet-278325) Getting domain xml...
	I0116 03:33:38.553019 1018250 main.go:141] libmachine: (kindnet-278325) Creating domain...
	I0116 03:33:39.585374 1017941 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 03:33:39.585447 1017941 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:33:39.638813 1017941 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0116 03:33:39.638910 1017941 ssh_runner.go:195] Run: which lz4
	I0116 03:33:39.643474 1017941 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 03:33:39.648428 1017941 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:33:39.648469 1017941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0116 03:33:41.372518 1017941 crio.go:444] Took 1.729088 seconds to copy over tarball
	I0116 03:33:41.372616 1017941 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:33:37.408107 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:37.908158 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:38.408915 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:38.908690 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:39.408082 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:39.908711 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:40.408035 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:40.908563 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:41.408900 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:41.908096 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:42.408321 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:42.908731 1017511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:33:43.023678 1017511 kubeadm.go:1088] duration metric: took 12.462736347s to wait for elevateKubeSystemPrivileges.
	I0116 03:33:43.023723 1017511 kubeadm.go:406] StartCluster complete in 27.744296434s
	I0116 03:33:43.023751 1017511 settings.go:142] acquiring lock: {Name:mk39c69fb5454efd5afab021c02c3bec1f1b4e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:33:43.023848 1017511 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:33:43.025117 1017511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:33:43.026185 1017511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:33:43.026316 1017511 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:33:43.026397 1017511 addons.go:69] Setting storage-provisioner=true in profile "auto-278325"
	I0116 03:33:43.026414 1017511 config.go:182] Loaded profile config "auto-278325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:33:43.026448 1017511 addons.go:234] Setting addon storage-provisioner=true in "auto-278325"
	I0116 03:33:43.026491 1017511 addons.go:69] Setting default-storageclass=true in profile "auto-278325"
	I0116 03:33:43.026516 1017511 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-278325"
	I0116 03:33:43.026534 1017511 host.go:66] Checking if "auto-278325" exists ...
	I0116 03:33:43.026909 1017511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:33:43.026939 1017511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:33:43.026998 1017511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:33:43.027049 1017511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:33:43.046119 1017511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41359
	I0116 03:33:43.046560 1017511 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:33:43.047104 1017511 main.go:141] libmachine: Using API Version  1
	I0116 03:33:43.047136 1017511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:33:43.047500 1017511 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:33:43.048146 1017511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:33:43.048189 1017511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:33:43.050774 1017511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41561
	I0116 03:33:43.051926 1017511 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:33:43.053296 1017511 main.go:141] libmachine: Using API Version  1
	I0116 03:33:43.053318 1017511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:33:43.054083 1017511 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:33:43.054363 1017511 main.go:141] libmachine: (auto-278325) Calling .GetState
	I0116 03:33:43.057901 1017511 addons.go:234] Setting addon default-storageclass=true in "auto-278325"
	I0116 03:33:43.057956 1017511 host.go:66] Checking if "auto-278325" exists ...
	I0116 03:33:43.058409 1017511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:33:43.058471 1017511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:33:43.070585 1017511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I0116 03:33:43.071187 1017511 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:33:43.071639 1017511 main.go:141] libmachine: Using API Version  1
	I0116 03:33:43.071654 1017511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:33:43.071945 1017511 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:33:43.072108 1017511 main.go:141] libmachine: (auto-278325) Calling .GetState
	I0116 03:33:43.073948 1017511 main.go:141] libmachine: (auto-278325) Calling .DriverName
	I0116 03:33:43.083763 1017511 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:33:43.081259 1017511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0116 03:33:43.086326 1017511 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:33:43.086357 1017511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:33:43.086390 1017511 main.go:141] libmachine: (auto-278325) Calling .GetSSHHostname
	I0116 03:33:43.088029 1017511 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:33:43.088732 1017511 main.go:141] libmachine: Using API Version  1
	I0116 03:33:43.088757 1017511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:33:43.089163 1017511 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:33:43.089768 1017511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:33:43.089869 1017511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:33:43.090128 1017511 main.go:141] libmachine: (auto-278325) DBG | domain auto-278325 has defined MAC address 52:54:00:9c:48:56 in network mk-auto-278325
	I0116 03:33:43.090662 1017511 main.go:141] libmachine: (auto-278325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:56", ip: ""} in network mk-auto-278325: {Iface:virbr2 ExpiryTime:2024-01-16 04:32:54 +0000 UTC Type:0 Mac:52:54:00:9c:48:56 Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:auto-278325 Clientid:01:52:54:00:9c:48:56}
	I0116 03:33:43.090682 1017511 main.go:141] libmachine: (auto-278325) DBG | domain auto-278325 has defined IP address 192.168.50.113 and MAC address 52:54:00:9c:48:56 in network mk-auto-278325
	I0116 03:33:43.091825 1017511 main.go:141] libmachine: (auto-278325) Calling .GetSSHPort
	I0116 03:33:43.093603 1017511 main.go:141] libmachine: (auto-278325) Calling .GetSSHKeyPath
	I0116 03:33:43.093778 1017511 main.go:141] libmachine: (auto-278325) Calling .GetSSHUsername
	I0116 03:33:43.094007 1017511 sshutil.go:53] new ssh client: &{IP:192.168.50.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/auto-278325/id_rsa Username:docker}
	I0116 03:33:43.111594 1017511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33109
	I0116 03:33:43.112225 1017511 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:33:43.112810 1017511 main.go:141] libmachine: Using API Version  1
	I0116 03:33:43.112828 1017511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:33:43.113119 1017511 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:33:43.113200 1017511 main.go:141] libmachine: (auto-278325) Calling .GetState
	I0116 03:33:43.117965 1017511 main.go:141] libmachine: (auto-278325) Calling .DriverName
	I0116 03:33:43.120112 1017511 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:33:43.120140 1017511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:33:43.120175 1017511 main.go:141] libmachine: (auto-278325) Calling .GetSSHHostname
	I0116 03:33:43.124311 1017511 main.go:141] libmachine: (auto-278325) DBG | domain auto-278325 has defined MAC address 52:54:00:9c:48:56 in network mk-auto-278325
	I0116 03:33:43.124871 1017511 main.go:141] libmachine: (auto-278325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:56", ip: ""} in network mk-auto-278325: {Iface:virbr2 ExpiryTime:2024-01-16 04:32:54 +0000 UTC Type:0 Mac:52:54:00:9c:48:56 Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:auto-278325 Clientid:01:52:54:00:9c:48:56}
	I0116 03:33:43.124897 1017511 main.go:141] libmachine: (auto-278325) DBG | domain auto-278325 has defined IP address 192.168.50.113 and MAC address 52:54:00:9c:48:56 in network mk-auto-278325
	I0116 03:33:43.125283 1017511 main.go:141] libmachine: (auto-278325) Calling .GetSSHPort
	I0116 03:33:43.125713 1017511 main.go:141] libmachine: (auto-278325) Calling .GetSSHKeyPath
	I0116 03:33:43.125947 1017511 main.go:141] libmachine: (auto-278325) Calling .GetSSHUsername
	I0116 03:33:43.126071 1017511 sshutil.go:53] new ssh client: &{IP:192.168.50.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/auto-278325/id_rsa Username:docker}
	I0116 03:33:43.221577 1017511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:33:43.262894 1017511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:33:43.298078 1017511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:33:43.660903 1017511 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-278325" context rescaled to 1 replicas
	I0116 03:33:43.660943 1017511 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.50.113 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:33:43.663767 1017511 out.go:177] * Verifying Kubernetes components...
	I0116 03:33:40.033184 1018250 main.go:141] libmachine: (kindnet-278325) Waiting to get IP...
	I0116 03:33:40.034510 1018250 main.go:141] libmachine: (kindnet-278325) DBG | domain kindnet-278325 has defined MAC address 52:54:00:6a:b2:f4 in network mk-kindnet-278325
	I0116 03:33:40.035093 1018250 main.go:141] libmachine: (kindnet-278325) DBG | unable to find current IP address of domain kindnet-278325 in network mk-kindnet-278325
	I0116 03:33:40.035122 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:40.035087 1018308 retry.go:31] will retry after 302.13591ms: waiting for machine to come up
	I0116 03:33:40.338933 1018250 main.go:141] libmachine: (kindnet-278325) DBG | domain kindnet-278325 has defined MAC address 52:54:00:6a:b2:f4 in network mk-kindnet-278325
	I0116 03:33:40.339716 1018250 main.go:141] libmachine: (kindnet-278325) DBG | unable to find current IP address of domain kindnet-278325 in network mk-kindnet-278325
	I0116 03:33:40.339743 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:40.339616 1018308 retry.go:31] will retry after 339.37737ms: waiting for machine to come up
	I0116 03:33:40.680261 1018250 main.go:141] libmachine: (kindnet-278325) DBG | domain kindnet-278325 has defined MAC address 52:54:00:6a:b2:f4 in network mk-kindnet-278325
	I0116 03:33:40.680783 1018250 main.go:141] libmachine: (kindnet-278325) DBG | unable to find current IP address of domain kindnet-278325 in network mk-kindnet-278325
	I0116 03:33:40.680811 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:40.680754 1018308 retry.go:31] will retry after 488.241702ms: waiting for machine to come up
	I0116 03:33:41.171191 1018250 main.go:141] libmachine: (kindnet-278325) DBG | domain kindnet-278325 has defined MAC address 52:54:00:6a:b2:f4 in network mk-kindnet-278325
	I0116 03:33:41.171929 1018250 main.go:141] libmachine: (kindnet-278325) DBG | unable to find current IP address of domain kindnet-278325 in network mk-kindnet-278325
	I0116 03:33:41.171954 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:41.171833 1018308 retry.go:31] will retry after 464.778466ms: waiting for machine to come up
	I0116 03:33:41.638484 1018250 main.go:141] libmachine: (kindnet-278325) DBG | domain kindnet-278325 has defined MAC address 52:54:00:6a:b2:f4 in network mk-kindnet-278325
	I0116 03:33:41.639194 1018250 main.go:141] libmachine: (kindnet-278325) DBG | unable to find current IP address of domain kindnet-278325 in network mk-kindnet-278325
	I0116 03:33:41.639227 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:41.639123 1018308 retry.go:31] will retry after 578.015455ms: waiting for machine to come up
	I0116 03:33:42.219088 1018250 main.go:141] libmachine: (kindnet-278325) DBG | domain kindnet-278325 has defined MAC address 52:54:00:6a:b2:f4 in network mk-kindnet-278325
	I0116 03:33:42.219666 1018250 main.go:141] libmachine: (kindnet-278325) DBG | unable to find current IP address of domain kindnet-278325 in network mk-kindnet-278325
	I0116 03:33:42.219688 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:42.219604 1018308 retry.go:31] will retry after 786.901001ms: waiting for machine to come up
	I0116 03:33:43.008087 1018250 main.go:141] libmachine: (kindnet-278325) DBG | domain kindnet-278325 has defined MAC address 52:54:00:6a:b2:f4 in network mk-kindnet-278325
	I0116 03:33:43.008540 1018250 main.go:141] libmachine: (kindnet-278325) DBG | unable to find current IP address of domain kindnet-278325 in network mk-kindnet-278325
	I0116 03:33:43.008586 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:43.008448 1018308 retry.go:31] will retry after 1.107498774s: waiting for machine to come up
	I0116 03:33:44.117223 1018250 main.go:141] libmachine: (kindnet-278325) DBG | domain kindnet-278325 has defined MAC address 52:54:00:6a:b2:f4 in network mk-kindnet-278325
	I0116 03:33:44.118490 1018250 main.go:141] libmachine: (kindnet-278325) DBG | unable to find current IP address of domain kindnet-278325 in network mk-kindnet-278325
	I0116 03:33:44.118517 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:44.118365 1018308 retry.go:31] will retry after 1.234584052s: waiting for machine to come up
	I0116 03:33:44.719429 1017941 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.346768421s)
	I0116 03:33:44.719460 1017941 crio.go:451] Took 3.346905 seconds to extract the tarball
	I0116 03:33:44.719472 1017941 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:33:44.772398 1017941 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:33:44.824969 1017941 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:33:44.825002 1017941 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:33:44.825108 1017941 ssh_runner.go:195] Run: crio config
	I0116 03:33:44.897932 1017941 cni.go:84] Creating CNI manager for ""
	I0116 03:33:44.897966 1017941 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:33:44.897992 1017941 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0116 03:33:44.898022 1017941 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-190843 NodeName:newest-cni-190843 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:m
ap[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:33:44.898212 1017941 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-190843"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:33:44.898311 1017941 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-190843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-190843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
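As a side note (not part of the captured test output): the kubeadm config and kubelet flags dumped above are what get written to /var/tmp/minikube/kubeadm.yaml.new and the kubelet systemd drop-in in the following steps. A quick way to eyeball such a multi-document config before kubeadm consumes it is to decode each YAML document and print its kind; this is a minimal sketch, assuming the config has been saved locally as kubeadm.yaml (the file name and the gopkg.in/yaml.v3 dependency are assumptions, not something the test itself uses).

    // checkconfig.go - walk the multi-document kubeadm config and print each kind.
    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Hypothetical local copy of the generated config shown above.
        f, err := os.Open("kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            err := dec.Decode(&doc)
            if errors.Is(err, io.EOF) {
                break
            }
            if err != nil {
                panic(err)
            }
            // The generated config carries four documents: InitConfiguration,
            // ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration.
            fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
        }
    }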
	I0116 03:33:44.898380 1017941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0116 03:33:44.910957 1017941 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:33:44.911037 1017941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:33:44.923911 1017941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (417 bytes)
	I0116 03:33:44.947449 1017941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0116 03:33:44.970287 1017941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I0116 03:33:44.994388 1017941 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0116 03:33:45.002773 1017941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:33:45.018995 1017941 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/newest-cni-190843 for IP: 192.168.39.3
	I0116 03:33:45.019047 1017941 certs.go:190] acquiring lock for shared ca certs: {Name:mk7c5920652340a030ac1e36e0fe21cc8437c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:33:45.019244 1017941 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key
	I0116 03:33:45.019344 1017941 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key
	I0116 03:33:45.019459 1017941 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/newest-cni-190843/client.key
	I0116 03:33:45.019545 1017941 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/newest-cni-190843/apiserver.key.599d509e
	I0116 03:33:45.019611 1017941 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/newest-cni-190843/proxy-client.key
	I0116 03:33:45.019765 1017941 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem (1338 bytes)
	W0116 03:33:45.019816 1017941 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482_empty.pem, impossibly tiny 0 bytes
	I0116 03:33:45.019835 1017941 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 03:33:45.019873 1017941 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/ca.pem (1082 bytes)
	I0116 03:33:45.019911 1017941 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:33:45.019939 1017941 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/certs/home/jenkins/minikube-integration/17967-971255/.minikube/certs/key.pem (1675 bytes)
	I0116 03:33:45.020003 1017941 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem (1708 bytes)
	I0116 03:33:45.020880 1017941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/newest-cni-190843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:33:45.051488 1017941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/newest-cni-190843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:33:45.079482 1017941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/newest-cni-190843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:33:45.110953 1017941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/newest-cni-190843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:33:45.143561 1017941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:33:45.170291 1017941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:33:45.200949 1017941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:33:45.231849 1017941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:33:45.261758 1017941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/certs/978482.pem --> /usr/share/ca-certificates/978482.pem (1338 bytes)
	I0116 03:33:45.298447 1017941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/ssl/certs/9784822.pem --> /usr/share/ca-certificates/9784822.pem (1708 bytes)
	I0116 03:33:45.327834 1017941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:33:45.358418 1017941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:33:45.380331 1017941 ssh_runner.go:195] Run: openssl version
	I0116 03:33:45.387084 1017941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9784822.pem && ln -fs /usr/share/ca-certificates/9784822.pem /etc/ssl/certs/9784822.pem"
	I0116 03:33:45.399942 1017941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9784822.pem
	I0116 03:33:45.406371 1017941 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:10 /usr/share/ca-certificates/9784822.pem
	I0116 03:33:45.406448 1017941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9784822.pem
	I0116 03:33:45.413983 1017941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9784822.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:33:45.427744 1017941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:33:45.441322 1017941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:33:45.449147 1017941 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:01 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:33:45.449243 1017941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:33:45.457305 1017941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:33:45.472054 1017941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/978482.pem && ln -fs /usr/share/ca-certificates/978482.pem /etc/ssl/certs/978482.pem"
	I0116 03:33:45.484413 1017941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/978482.pem
	I0116 03:33:45.491754 1017941 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:10 /usr/share/ca-certificates/978482.pem
	I0116 03:33:45.491846 1017941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/978482.pem
	I0116 03:33:45.500472 1017941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/978482.pem /etc/ssl/certs/51391683.0"
	I0116 03:33:45.516386 1017941 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:33:45.524127 1017941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:33:45.533142 1017941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:33:45.541844 1017941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:33:45.548530 1017941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:33:45.554829 1017941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:33:45.561725 1017941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 03:33:45.568289 1017941 kubeadm.go:404] StartCluster: {Name:newest-cni-190843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-190843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_
pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:33:45.568413 1017941 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:33:45.568479 1017941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:33:45.614876 1017941 cri.go:89] found id: ""
	I0116 03:33:45.614969 1017941 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:33:45.626509 1017941 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:33:45.626534 1017941 kubeadm.go:636] restartCluster start
	I0116 03:33:45.626596 1017941 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:33:45.636241 1017941 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:33:45.637217 1017941 kubeconfig.go:135] verify returned: extract IP: "newest-cni-190843" does not appear in /home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 03:33:45.637727 1017941 kubeconfig.go:146] "newest-cni-190843" context is missing from /home/jenkins/minikube-integration/17967-971255/kubeconfig - will repair!
	I0116 03:33:45.638678 1017941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-971255/kubeconfig: {Name:mk5ca3018274b0a03599cf3631785c0629f81a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:33:45.720146 1017941 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:33:45.731891 1017941 api_server.go:166] Checking apiserver status ...
	I0116 03:33:45.731975 1017941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:33:45.744003 1017941 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:33:46.232647 1017941 api_server.go:166] Checking apiserver status ...
	I0116 03:33:46.232763 1017941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:33:46.246799 1017941 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:33:43.665871 1017511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:33:45.527959 1017511 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.306237823s)
	I0116 03:33:45.527999 1017511 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0116 03:33:45.528035 1017511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.265092828s)
	I0116 03:33:45.528096 1017511 main.go:141] libmachine: Making call to close driver server
	I0116 03:33:45.528116 1017511 main.go:141] libmachine: (auto-278325) Calling .Close
	I0116 03:33:45.528425 1017511 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:33:45.528449 1017511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:33:45.528464 1017511 main.go:141] libmachine: Making call to close driver server
	I0116 03:33:45.528478 1017511 main.go:141] libmachine: (auto-278325) Calling .Close
	I0116 03:33:45.528776 1017511 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:33:45.528800 1017511 main.go:141] libmachine: (auto-278325) DBG | Closing plugin on server side
	I0116 03:33:45.528809 1017511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:33:45.539002 1017511 main.go:141] libmachine: Making call to close driver server
	I0116 03:33:45.539031 1017511 main.go:141] libmachine: (auto-278325) Calling .Close
	I0116 03:33:45.539418 1017511 main.go:141] libmachine: (auto-278325) DBG | Closing plugin on server side
	I0116 03:33:45.539470 1017511 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:33:45.539485 1017511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:33:46.935862 1017511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.637731696s)
	I0116 03:33:46.935925 1017511 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.270011149s)
	I0116 03:33:46.935930 1017511 main.go:141] libmachine: Making call to close driver server
	I0116 03:33:46.936050 1017511 main.go:141] libmachine: (auto-278325) Calling .Close
	I0116 03:33:46.936393 1017511 main.go:141] libmachine: (auto-278325) DBG | Closing plugin on server side
	I0116 03:33:46.936438 1017511 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:33:46.936449 1017511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:33:46.936465 1017511 main.go:141] libmachine: Making call to close driver server
	I0116 03:33:46.936493 1017511 main.go:141] libmachine: (auto-278325) Calling .Close
	I0116 03:33:46.936837 1017511 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:33:46.936896 1017511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:33:46.936897 1017511 main.go:141] libmachine: (auto-278325) DBG | Closing plugin on server side
	I0116 03:33:46.938937 1017511 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0116 03:33:46.937368 1017511 node_ready.go:35] waiting up to 15m0s for node "auto-278325" to be "Ready" ...
	I0116 03:33:46.940356 1017511 addons.go:505] enable addons completed in 3.914036784s: enabled=[default-storageclass storage-provisioner]
	I0116 03:33:46.960139 1017511 node_ready.go:49] node "auto-278325" has status "Ready":"True"
	I0116 03:33:46.960183 1017511 node_ready.go:38] duration metric: took 19.852631ms waiting for node "auto-278325" to be "Ready" ...
	I0116 03:33:46.960197 1017511 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:33:46.994013 1017511 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-6vwlf" in "kube-system" namespace to be "Ready" ...
	I0116 03:33:45.355078 1018250 main.go:141] libmachine: (kindnet-278325) DBG | domain kindnet-278325 has defined MAC address 52:54:00:6a:b2:f4 in network mk-kindnet-278325
	I0116 03:33:45.355659 1018250 main.go:141] libmachine: (kindnet-278325) DBG | unable to find current IP address of domain kindnet-278325 in network mk-kindnet-278325
	I0116 03:33:45.355694 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:45.355604 1018308 retry.go:31] will retry after 1.760510188s: waiting for machine to come up
	I0116 03:33:47.117608 1018250 main.go:141] libmachine: (kindnet-278325) DBG | domain kindnet-278325 has defined MAC address 52:54:00:6a:b2:f4 in network mk-kindnet-278325
	I0116 03:33:47.118165 1018250 main.go:141] libmachine: (kindnet-278325) DBG | unable to find current IP address of domain kindnet-278325 in network mk-kindnet-278325
	I0116 03:33:47.118199 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:47.118118 1018308 retry.go:31] will retry after 1.582357686s: waiting for machine to come up
	I0116 03:33:48.701930 1018250 main.go:141] libmachine: (kindnet-278325) DBG | domain kindnet-278325 has defined MAC address 52:54:00:6a:b2:f4 in network mk-kindnet-278325
	I0116 03:33:48.702457 1018250 main.go:141] libmachine: (kindnet-278325) DBG | unable to find current IP address of domain kindnet-278325 in network mk-kindnet-278325
	I0116 03:33:48.702495 1018250 main.go:141] libmachine: (kindnet-278325) DBG | I0116 03:33:48.702405 1018308 retry.go:31] will retry after 2.32036581s: waiting for machine to come up
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 03:13:23 UTC, ends at Tue 2024-01-16 03:33:51 UTC. --
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.029901457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da,PodSandboxId:61a74ac9505a932b4461b18658bb16bc362d6a18811776e82814571ec9db3fc8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375136852504664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c309131-3f2c-411d-9876-05424a2c3b79,},Annotations:map[string]string{io.kubernetes.container.hash: e101ede,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760,PodSandboxId:db7ec76550cb34c5db28c91510a33984c8e5c903f4f6acd4f9158d8a26abb56c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705375135524094244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mk795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b928a6ae-07af-4bc4-a0c5-b3027730459c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c266b1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f,PodSandboxId:c819f2cae9bceb42aecab2e15bce7bf8b11e7e40d1bdd57bed4fadb43b7241f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705375133697381276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw495,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09,},Annotations:map[string]string{io.kubernetes.container.hash: e69774ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e,PodSandboxId:5fc17422f18dab54e9aea11b879963b8baac7b8a0e7719cafde40f3d7877077e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705375112223677862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 8c0409886914ac24a407c6ba44a14827,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8,PodSandboxId:c915ddde32e8cd1b52b13209fc9f95bd71615bddc33fe6d6a7cb41d0c6322278,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705375112016065160,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64be651799388f650e
19798b8a3d6fbb,},Annotations:map[string]string{io.kubernetes.container.hash: 92ed9e12,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45,PodSandboxId:a4fbf180837a071cc7ec7173f14c2935d9dd5c7c942378868c616e45669d03b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705375111794029472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 5698deddf521f9a3979fbd1559af510a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24,PodSandboxId:719bc39a7d56c604da7879cbaff8d6c0e4b256ef0bde3332acbe8aa755fbc78d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705375111731047821,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 259678fcd273c7ffaa6ec96a449bc3eb,},Annotations:map[string]string{io.kubernetes.container.hash: f0349e94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e6882855-a131-41a0-9c6d-4f6643ce3942 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.034897949Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=28828783-c77f-4c5d-9a77-da3ec82ad1ee name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.034984402Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=28828783-c77f-4c5d-9a77-da3ec82ad1ee name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.035162897Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da,PodSandboxId:61a74ac9505a932b4461b18658bb16bc362d6a18811776e82814571ec9db3fc8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375136852504664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c309131-3f2c-411d-9876-05424a2c3b79,},Annotations:map[string]string{io.kubernetes.container.hash: e101ede,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760,PodSandboxId:db7ec76550cb34c5db28c91510a33984c8e5c903f4f6acd4f9158d8a26abb56c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705375135524094244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mk795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b928a6ae-07af-4bc4-a0c5-b3027730459c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c266b1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f,PodSandboxId:c819f2cae9bceb42aecab2e15bce7bf8b11e7e40d1bdd57bed4fadb43b7241f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705375133697381276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw495,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09,},Annotations:map[string]string{io.kubernetes.container.hash: e69774ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e,PodSandboxId:5fc17422f18dab54e9aea11b879963b8baac7b8a0e7719cafde40f3d7877077e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705375112223677862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 8c0409886914ac24a407c6ba44a14827,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8,PodSandboxId:c915ddde32e8cd1b52b13209fc9f95bd71615bddc33fe6d6a7cb41d0c6322278,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705375112016065160,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64be651799388f650e
19798b8a3d6fbb,},Annotations:map[string]string{io.kubernetes.container.hash: 92ed9e12,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45,PodSandboxId:a4fbf180837a071cc7ec7173f14c2935d9dd5c7c942378868c616e45669d03b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705375111794029472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 5698deddf521f9a3979fbd1559af510a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24,PodSandboxId:719bc39a7d56c604da7879cbaff8d6c0e4b256ef0bde3332acbe8aa755fbc78d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705375111731047821,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 259678fcd273c7ffaa6ec96a449bc3eb,},Annotations:map[string]string{io.kubernetes.container.hash: f0349e94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=28828783-c77f-4c5d-9a77-da3ec82ad1ee name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.036112015Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=79b02454-6a7c-4e9d-a1ec-fcc7e1e7b96a name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.036279849Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1705375136994406666,StartedAt:1705375137036306496,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c309131-3f2c-411d-9876-05424a2c3b79,},Annotations:map[string]string{io.kubernetes.container.hash: e101ede,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/8c309131-3f2c-411d-9876-05424a2c3b79/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/8c309131-3f2c-411d-9876-05424a2c3b79/containers/storage-provisioner/64c37724,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/8c309131-3f2c-411d-9876-05424a2c3b79/volumes/kubernetes.io~projected/kube-api-access-7xrht,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_storage-provisioner_8c309131-3f2c-411d-9876-05424a2c3b79/storage-prov
isioner/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=79b02454-6a7c-4e9d-a1ec-fcc7e1e7b96a name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.037107923Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=92b1d1a5-61d1-4b86-adc2-73f8832f3599 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.037235753Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1705375135782355134,StartedAt:1705375135871621527,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.10.1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mk795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b928a6ae-07af-4bc4-a0c5-b3027730459c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c266b1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/b928a6ae-07af-4bc4-a0c5-b3027730459c/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b928a6ae-07af-4bc4-a0c5-b3027730459c/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b928a6ae-07af-4bc4-a0c5-b3027730459c/containers/coredns/21ea21f9,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,Host
Path:/var/lib/kubelet/pods/b928a6ae-07af-4bc4-a0c5-b3027730459c/volumes/kubernetes.io~projected/kube-api-access-l5xc2,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_coredns-5dd5756b68-mk795_b928a6ae-07af-4bc4-a0c5-b3027730459c/coredns/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=92b1d1a5-61d1-4b86-adc2-73f8832f3599 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.037992470Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=2c20df29-0a07-416f-92c3-6f05dc59c780 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.038197995Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1705375134576885916,StartedAt:1705375134849889602,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.28.4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09,},Annotations:map[string]string{io.kubernetes.container.hash: e69774ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09/containers/kube-proxy/6060704b,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/kubelet/pods/d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kub
ernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09/volumes/kubernetes.io~projected/kube-api-access-qv4fj,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-proxy-zw495_d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09/kube-proxy/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=2c20df29-0a07-416f-92c3-6f05dc59c780 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.038872890Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=7fb73cd4-734c-4507-8aab-d8978f64a238 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.038984333Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1705375112479785018,StartedAt:1705375113948307398,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.28.4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0409886914ac24a407c6ba44a14827,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/8c0409886914ac24a407c6ba44a14827/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/8c0409886914ac24a407c6ba44a14827/containers/kube-scheduler/30e71111,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-scheduler-default-k8s-diff-port-775571_8c0409886914ac24a407c6ba44a14827/kube-scheduler/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=7fb73cd4-734c-4507-8aab-d8978f64a238 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.039672253Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=bbaf2f7b-9bee-4203-9e52-e9ee56979906 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.039793060Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1705375112261963808,StartedAt:1705375113699244834,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.9-0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64be651799388f650e19798b8a3d6fbb,},Annotations:map[string]string{io.kubernetes.container.hash: 92ed9e12,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/64be651799388f650e19798b8a3d6fbb/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/64be651799388f650e19798b8a3d6fbb/containers/etcd/25dfbde9,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_etcd-default-k8s-diff-port-775571_64be651799388f650e19798b8a3d6fbb/etcd/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=bbaf2f7b-9bee-4203-9e52-e9ee56979906 name=/runtime.v1.Runti
meService/ContainerStatus
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.040634902Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=c8f5379c-cfdf-45bc-b68f-ddf848caf4d4 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.041020554Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1705375111982069117,StartedAt:1705375113146170323,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.28.4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5698deddf521f9a3979fbd1559af510a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCo
unt: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5698deddf521f9a3979fbd1559af510a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5698deddf521f9a3979fbd1559af510a/containers/kube-controller-manager/5f8580a3,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propaga
tion:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-controller-manager-default-k8s-diff-port-775571_5698deddf521f9a3979fbd1559af510a/kube-controller-manager/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=c8f5379c-cfdf-45bc-b68f-ddf848caf4d4 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.041784122Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=8d6f023c-3ae5-4e2b-aa35-acaa114ca892 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.041898262Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1705375111881558429,StartedAt:1705375112737662589,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.28.4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 259678fcd273c7ffaa6ec96a449bc3eb,},Annotations:map[string]string{io.kubernetes.container.hash: f0349e94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/259678fcd273c7ffaa6ec96a449bc3eb/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/259678fcd273c7ffaa6ec96a449bc3eb/containers/kube-apiserver/a02b8694,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-apiserver-default-
k8s-diff-port-775571_259678fcd273c7ffaa6ec96a449bc3eb/kube-apiserver/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=8d6f023c-3ae5-4e2b-aa35-acaa114ca892 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.078729848Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e9fe0164-cfc5-4b0e-bc50-68f8f74be7e0 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.078796161Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e9fe0164-cfc5-4b0e-bc50-68f8f74be7e0 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.080202222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=03526633-9996-40a4-91e9-788ff8a5d31a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.080629490Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705376031080549044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=03526633-9996-40a4-91e9-788ff8a5d31a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.081190611Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=397bf120-b79b-4ce3-a512-8c97b9f5c6aa name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.081236122Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=397bf120-b79b-4ce3-a512-8c97b9f5c6aa name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:33:51 default-k8s-diff-port-775571 crio[715]: time="2024-01-16 03:33:51.081401453Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da,PodSandboxId:61a74ac9505a932b4461b18658bb16bc362d6a18811776e82814571ec9db3fc8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375136852504664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c309131-3f2c-411d-9876-05424a2c3b79,},Annotations:map[string]string{io.kubernetes.container.hash: e101ede,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760,PodSandboxId:db7ec76550cb34c5db28c91510a33984c8e5c903f4f6acd4f9158d8a26abb56c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705375135524094244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mk795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b928a6ae-07af-4bc4-a0c5-b3027730459c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c266b1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f,PodSandboxId:c819f2cae9bceb42aecab2e15bce7bf8b11e7e40d1bdd57bed4fadb43b7241f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705375133697381276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw495,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: d2f3b2b8-ba3f-49d0-b12b-9e78c5867c09,},Annotations:map[string]string{io.kubernetes.container.hash: e69774ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e,PodSandboxId:5fc17422f18dab54e9aea11b879963b8baac7b8a0e7719cafde40f3d7877077e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705375112223677862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 8c0409886914ac24a407c6ba44a14827,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8,PodSandboxId:c915ddde32e8cd1b52b13209fc9f95bd71615bddc33fe6d6a7cb41d0c6322278,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705375112016065160,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64be651799388f650e
19798b8a3d6fbb,},Annotations:map[string]string{io.kubernetes.container.hash: 92ed9e12,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45,PodSandboxId:a4fbf180837a071cc7ec7173f14c2935d9dd5c7c942378868c616e45669d03b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705375111794029472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 5698deddf521f9a3979fbd1559af510a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24,PodSandboxId:719bc39a7d56c604da7879cbaff8d6c0e4b256ef0bde3332acbe8aa755fbc78d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705375111731047821,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-775571,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 259678fcd273c7ffaa6ec96a449bc3eb,},Annotations:map[string]string{io.kubernetes.container.hash: f0349e94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=397bf120-b79b-4ce3-a512-8c97b9f5c6aa name=/runtime.v1.RuntimeService/ListContainers
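	
	The debug entries above are plain CRI RuntimeService/ImageService calls (ListContainers, ContainerStatus, Version, ImageFsInfo). A minimal sketch of issuing the same queries by hand, assuming crictl is available on the node (e.g. after minikube -p default-k8s-diff-port-775571 ssh):
	
	  sudo crictl ps -a                  # ListContainers: every container with state, pod and attempt count
	  sudo crictl inspect f2b31947cd9ab  # ContainerStatus for the storage-provisioner container seen above
	  sudo crictl version                # RuntimeService/Version (reports cri-o 1.24.1 here)
	  sudo crictl imagefsinfo            # ImageService/ImageFsInfo (overlay-images filesystem usage)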
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f2b31947cd9ab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   61a74ac9505a9       storage-provisioner
	8c87760cc0f44       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   db7ec76550cb3       coredns-5dd5756b68-mk795
	cd75d2109b882       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   14 minutes ago      Running             kube-proxy                0                   c819f2cae9bce       kube-proxy-zw495
	19ca9f9fb8267       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   5fc17422f18da       kube-scheduler-default-k8s-diff-port-775571
	c4fca60077d67       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   c915ddde32e8c       etcd-default-k8s-diff-port-775571
	7cde9c38c1e73       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   a4fbf180837a0       kube-controller-manager-default-k8s-diff-port-775571
	94ed68f3d4f24       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   719bc39a7d56c       kube-apiserver-default-k8s-diff-port-775571
	
	
	==> coredns [8c87760cc0f44eb99f0cb2d610478aa2e69f84449726d201a30521127a679760] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:57520 - 6359 "HINFO IN 6562304830807243736.8044346787423104161. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009863034s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-775571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-775571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=default-k8s-diff-port-775571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_18_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:18:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-775571
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:33:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:29:14 +0000   Tue, 16 Jan 2024 03:18:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:29:14 +0000   Tue, 16 Jan 2024 03:18:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:29:14 +0000   Tue, 16 Jan 2024 03:18:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:29:14 +0000   Tue, 16 Jan 2024 03:18:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.158
	  Hostname:    default-k8s-diff-port-775571
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 16cfbd7b5e9d4c779239e348cab0eaeb
	  System UUID:                16cfbd7b-5e9d-4c77-9239-e348cab0eaeb
	  Boot ID:                    46f4f379-8263-499e-bd43-2573973e73a1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-mk795                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-default-k8s-diff-port-775571                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-default-k8s-diff-port-775571             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-775571    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-zw495                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-default-k8s-diff-port-775571             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-928d7                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-775571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-775571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node default-k8s-diff-port-775571 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node default-k8s-diff-port-775571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node default-k8s-diff-port-775571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node default-k8s-diff-port-775571 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node default-k8s-diff-port-775571 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m                kubelet          Node default-k8s-diff-port-775571 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node default-k8s-diff-port-775571 event: Registered Node default-k8s-diff-port-775571 in Controller
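	
	The node description above can be reproduced against the same cluster; a sketch assuming the kubeconfig context is named after the minikube profile:
	
	  kubectl --context default-k8s-diff-port-775571 describe node default-k8s-diff-port-775571
	  kubectl --context default-k8s-diff-port-775571 get node -o wide   # one-line view with InternalIP, kubelet and runtime versions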
	
	
	==> dmesg <==
	[Jan16 03:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073507] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.929847] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.645523] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153043] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.492146] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.282122] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.156855] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.201235] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.169795] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.281013] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +18.101883] systemd-fstab-generator[915]: Ignoring "noauto" for root device
	[Jan16 03:14] kauditd_printk_skb: 29 callbacks suppressed
	[Jan16 03:18] systemd-fstab-generator[3538]: Ignoring "noauto" for root device
	[  +9.816044] systemd-fstab-generator[3865]: Ignoring "noauto" for root device
	[ +14.143912] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [c4fca60077d67415eed36ea7fec522e468538059f422909136109bb3049be2c8] <==
	{"level":"info","ts":"2024-01-16T03:18:34.106788Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:18:34.110963Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"244d86dcb1337571","local-member-attributes":"{Name:default-k8s-diff-port-775571 ClientURLs:[https://192.168.72.158:2379]}","request-path":"/0/members/244d86dcb1337571/attributes","cluster-id":"c08228541f5dd967","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T03:18:34.111514Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:18:34.112717Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c08228541f5dd967","local-member-id":"244d86dcb1337571","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:18:34.112809Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:18:34.112861Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:18:34.112905Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:18:34.113903Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T03:18:34.116673Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T03:18:34.116767Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-16T03:18:34.137975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.158:2379"}
	{"level":"info","ts":"2024-01-16T03:28:34.587354Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":715}
	{"level":"info","ts":"2024-01-16T03:28:34.591275Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":715,"took":"3.026721ms","hash":526033480}
	{"level":"info","ts":"2024-01-16T03:28:34.591374Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":526033480,"revision":715,"compact-revision":-1}
	{"level":"info","ts":"2024-01-16T03:33:14.749542Z","caller":"traceutil/trace.go:171","msg":"trace[1250741450] transaction","detail":"{read_only:false; response_revision:1185; number_of_response:1; }","duration":"134.025997ms","start":"2024-01-16T03:33:14.615452Z","end":"2024-01-16T03:33:14.749478Z","steps":["trace[1250741450] 'process raft request'  (duration: 133.498692ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:33:15.017449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.786315ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8462700275858300513 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1184 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-16T03:33:15.017751Z","caller":"traceutil/trace.go:171","msg":"trace[1664708238] linearizableReadLoop","detail":"{readStateIndex:1377; appliedIndex:1376; }","duration":"116.817492ms","start":"2024-01-16T03:33:14.900851Z","end":"2024-01-16T03:33:15.017668Z","steps":["trace[1664708238] 'read index received'  (duration: 27.761µs)","trace[1664708238] 'applied index is now lower than readState.Index'  (duration: 116.787988ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T03:33:15.017827Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.991762ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T03:33:15.017858Z","caller":"traceutil/trace.go:171","msg":"trace[87613774] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1186; }","duration":"117.020242ms","start":"2024-01-16T03:33:14.90082Z","end":"2024-01-16T03:33:15.01784Z","steps":["trace[87613774] 'agreement among raft nodes before linearized reading'  (duration: 116.967673ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:33:15.018096Z","caller":"traceutil/trace.go:171","msg":"trace[1843656565] transaction","detail":"{read_only:false; response_revision:1186; number_of_response:1; }","duration":"261.214154ms","start":"2024-01-16T03:33:14.756869Z","end":"2024-01-16T03:33:15.018083Z","steps":["trace[1843656565] 'process raft request'  (duration: 68.493657ms)","trace[1843656565] 'compare'  (duration: 190.590479ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T03:33:34.596546Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":958}
	{"level":"info","ts":"2024-01-16T03:33:34.598335Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":958,"took":"1.292042ms","hash":2099289859}
	{"level":"info","ts":"2024-01-16T03:33:34.598498Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2099289859,"revision":958,"compact-revision":715}
	{"level":"info","ts":"2024-01-16T03:33:45.950089Z","caller":"traceutil/trace.go:171","msg":"trace[170779560] transaction","detail":"{read_only:false; response_revision:1211; number_of_response:1; }","duration":"484.522176ms","start":"2024-01-16T03:33:45.465523Z","end":"2024-01-16T03:33:45.950045Z","steps":["trace[170779560] 'process raft request'  (duration: 483.945266ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:33:45.950492Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:33:45.465502Z","time spent":"484.836796ms","remote":"127.0.0.1:56698","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-n2an2xzh626kkst5gvnel3nnoq\" mod_revision:1202 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-n2an2xzh626kkst5gvnel3nnoq\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-n2an2xzh626kkst5gvnel3nnoq\" > >"}
	
	
	==> kernel <==
	 03:33:51 up 20 min,  0 users,  load average: 0.30, 0.24, 0.23
	Linux default-k8s-diff-port-775571 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [94ed68f3d4f24313e6a1fce6e4b90b946fd2225aac302fced35d46a9deda6a24] <==
	E0116 03:29:37.612656       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:29:37.612671       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:30:36.433531       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 03:31:36.433152       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 03:31:37.611656       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:31:37.611821       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:31:37.611878       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:31:37.613878       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:31:37.614004       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:31:37.614047       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:32:36.433343       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 03:33:36.433502       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 03:33:36.616002       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:33:36.616152       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:33:36.616782       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 03:33:37.616355       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:33:37.616409       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:33:37.616418       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:33:37.616491       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:33:37.616671       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:33:37.617935       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7cde9c38c1e733d3e9e470f1180254a447d2686e0139615245efc3f8e8a06e45] <==
	I0116 03:27:52.200395       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:28:21.769142       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:28:22.209772       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:28:51.777032       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:28:52.220717       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:29:21.783278       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:29:22.229810       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:29:51.793373       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:29:52.240241       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0116 03:30:08.137974       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="401.929µs"
	I0116 03:30:21.132991       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="149.476µs"
	E0116 03:30:21.799809       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:30:22.254471       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:30:51.814090       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:30:52.265782       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:31:21.822651       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:31:22.277334       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:31:51.830920       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:31:52.287703       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:32:21.837760       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:32:22.300293       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:32:51.847936       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:32:52.311976       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:33:21.857077       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:33:22.326976       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [cd75d2109b8821b8657f2dd6c4f2c4ca736d012c15668e44c34657f674fd675f] <==
	I0116 03:18:56.536685       1 server_others.go:69] "Using iptables proxy"
	I0116 03:18:56.614974       1 node.go:141] Successfully retrieved node IP: 192.168.72.158
	I0116 03:18:56.809256       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 03:18:56.809399       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 03:18:56.813336       1 server_others.go:152] "Using iptables Proxier"
	I0116 03:18:56.813907       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 03:18:56.817464       1 server.go:846] "Version info" version="v1.28.4"
	I0116 03:18:56.817888       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:18:56.839188       1 config.go:188] "Starting service config controller"
	I0116 03:18:56.840302       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 03:18:56.840389       1 config.go:97] "Starting endpoint slice config controller"
	I0116 03:18:56.840399       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 03:18:56.857121       1 config.go:315] "Starting node config controller"
	I0116 03:18:56.857277       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 03:18:56.941278       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 03:18:56.943076       1 shared_informer.go:318] Caches are synced for service config
	I0116 03:18:56.961222       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [19ca9f9fb82670ac732e41c9fb8af63d1fad24065000c55e0a0eb9ddae08738e] <==
	W0116 03:18:37.635976       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:18:37.636039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 03:18:37.676555       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 03:18:37.676692       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 03:18:37.700836       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 03:18:37.700961       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0116 03:18:37.721544       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 03:18:37.721750       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 03:18:37.775958       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 03:18:37.776067       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 03:18:37.833044       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 03:18:37.833163       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 03:18:37.847794       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:18:37.847889       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 03:18:37.851160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:18:37.851231       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 03:18:37.949133       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:18:37.949224       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 03:18:37.979886       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:18:37.980003       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0116 03:18:38.010744       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 03:18:38.010842       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 03:18:38.111757       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 03:18:38.111822       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0116 03:18:40.889858       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:13:23 UTC, ends at Tue 2024-01-16 03:33:51 UTC. --
	Jan 16 03:31:17 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:31:17.113654    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:31:32 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:31:32.112943    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:31:40 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:31:40.224531    3872 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:31:40 default-k8s-diff-port-775571 kubelet[3872]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:31:40 default-k8s-diff-port-775571 kubelet[3872]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:31:40 default-k8s-diff-port-775571 kubelet[3872]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:31:47 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:31:47.112745    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:31:59 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:31:59.115022    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:32:12 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:32:12.117242    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:32:27 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:32:27.113148    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:32:38 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:32:38.112431    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:32:40 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:32:40.224806    3872 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:32:40 default-k8s-diff-port-775571 kubelet[3872]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:32:40 default-k8s-diff-port-775571 kubelet[3872]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:32:40 default-k8s-diff-port-775571 kubelet[3872]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:32:53 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:32:53.113189    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:33:07 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:33:07.112963    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:33:19 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:33:19.114262    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:33:30 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:33:30.113793    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	Jan 16 03:33:40 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:33:40.230767    3872 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:33:40 default-k8s-diff-port-775571 kubelet[3872]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:33:40 default-k8s-diff-port-775571 kubelet[3872]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:33:40 default-k8s-diff-port-775571 kubelet[3872]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:33:40 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:33:40.320951    3872 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jan 16 03:33:43 default-k8s-diff-port-775571 kubelet[3872]: E0116 03:33:43.114002    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-928d7" podUID="d3671063-27a1-4ad8-9f5f-b3e01205f483"
	
	
	==> storage-provisioner [f2b31947cd9ab2c8bd5934451dd476eedb4489db897be097f0a19bf23818e9da] <==
	I0116 03:18:57.087281       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:18:57.107530       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:18:57.107759       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:18:57.122077       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:18:57.122324       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-775571_9420944d-9631-4a43-8dbd-48fb909c7d8a!
	I0116 03:18:57.130264       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0445ba12-cf52-479b-873a-eccc1627ec07", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-775571_9420944d-9631-4a43-8dbd-48fb909c7d8a became leader
	I0116 03:18:57.223212       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-775571_9420944d-9631-4a43-8dbd-48fb909c7d8a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-775571 -n default-k8s-diff-port-775571
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-775571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-928d7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-775571 describe pod metrics-server-57f55c9bc5-928d7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-775571 describe pod metrics-server-57f55c9bc5-928d7: exit status 1 (87.838303ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-928d7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-775571 describe pod metrics-server-57f55c9bc5-928d7: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (77.47s)

                                                
                                    

Test pass (248/309)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 7.16
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
9 TestDownloadOnly/v1.16.0/DeleteAll 0.15
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.28.4/json-events 4.66
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.15
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.29.0-rc.2/json-events 3.9
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.16
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.62
31 TestOffline 125.66
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 152.3
38 TestAddons/parallel/Registry 16.31
40 TestAddons/parallel/InspektorGadget 12.36
41 TestAddons/parallel/MetricsServer 6.15
42 TestAddons/parallel/HelmTiller 10.44
44 TestAddons/parallel/CSI 70.68
45 TestAddons/parallel/Headlamp 15.6
46 TestAddons/parallel/CloudSpanner 6.77
47 TestAddons/parallel/LocalPath 53.55
48 TestAddons/parallel/NvidiaDevicePlugin 5.64
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.14
54 TestCertOptions 96.35
55 TestCertExpiration 371.01
57 TestForceSystemdFlag 104.16
58 TestForceSystemdEnv 48.38
60 TestKVMDriverInstallOrUpdate 1.43
64 TestErrorSpam/setup 48.23
65 TestErrorSpam/start 0.41
66 TestErrorSpam/status 0.81
67 TestErrorSpam/pause 1.63
68 TestErrorSpam/unpause 1.81
69 TestErrorSpam/stop 2.29
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 60.51
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 36.01
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.29
81 TestFunctional/serial/CacheCmd/cache/add_local 1.12
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.14
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 32.93
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.58
92 TestFunctional/serial/LogsFileCmd 1.57
93 TestFunctional/serial/InvalidService 4.79
95 TestFunctional/parallel/ConfigCmd 0.47
96 TestFunctional/parallel/DashboardCmd 25.2
97 TestFunctional/parallel/DryRun 0.34
98 TestFunctional/parallel/InternationalLanguage 0.17
99 TestFunctional/parallel/StatusCmd 1.12
103 TestFunctional/parallel/ServiceCmdConnect 9.69
104 TestFunctional/parallel/AddonsCmd 0.16
105 TestFunctional/parallel/PersistentVolumeClaim 46.91
107 TestFunctional/parallel/SSHCmd 0.63
108 TestFunctional/parallel/CpCmd 1.57
109 TestFunctional/parallel/MySQL 28.81
110 TestFunctional/parallel/FileSync 0.27
111 TestFunctional/parallel/CertSync 1.67
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
119 TestFunctional/parallel/License 0.21
120 TestFunctional/parallel/ServiceCmd/DeployApp 12.25
121 TestFunctional/parallel/Version/short 0.07
122 TestFunctional/parallel/Version/components 1.35
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.57
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.38
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.38
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.65
127 TestFunctional/parallel/ImageCommands/ImageBuild 13.14
128 TestFunctional/parallel/ImageCommands/Setup 1
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.67
139 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
140 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
141 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
143 TestFunctional/parallel/ProfileCmd/profile_list 0.46
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
145 TestFunctional/parallel/MountCmd/any-port 10.05
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.4
148 TestFunctional/parallel/ServiceCmd/List 0.42
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
151 TestFunctional/parallel/ServiceCmd/Format 0.38
152 TestFunctional/parallel/ServiceCmd/URL 0.42
153 TestFunctional/parallel/MountCmd/specific-port 2.37
154 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.35
155 TestFunctional/parallel/MountCmd/VerifyCleanup 1.73
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.69
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.5
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.02
161 TestFunctional/delete_minikube_cached_images 0.02
165 TestIngressAddonLegacy/StartLegacyK8sCluster 77.21
167 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.48
168 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.66
172 TestJSONOutput/start/Command 104.22
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.71
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.69
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 7.11
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.24
200 TestMainNoArgs 0.07
201 TestMinikubeProfile 103
204 TestMountStart/serial/StartWithMountFirst 28.64
205 TestMountStart/serial/VerifyMountFirst 0.44
206 TestMountStart/serial/StartWithMountSecond 28.1
207 TestMountStart/serial/VerifyMountSecond 0.42
208 TestMountStart/serial/DeleteFirst 0.92
209 TestMountStart/serial/VerifyMountPostDelete 0.43
210 TestMountStart/serial/Stop 1.2
211 TestMountStart/serial/RestartStopped 25.24
212 TestMountStart/serial/VerifyMountPostStop 0.46
215 TestMultiNode/serial/FreshStart2Nodes 115.1
216 TestMultiNode/serial/DeployApp2Nodes 4.82
218 TestMultiNode/serial/AddNode 42.33
219 TestMultiNode/serial/MultiNodeLabels 0.07
220 TestMultiNode/serial/ProfileList 0.24
221 TestMultiNode/serial/CopyFile 8.26
222 TestMultiNode/serial/StopNode 3.07
223 TestMultiNode/serial/StartAfterStop 29.21
225 TestMultiNode/serial/DeleteNode 1.84
227 TestMultiNode/serial/RestartMultiNode 441.84
228 TestMultiNode/serial/ValidateNameConflict 53.8
235 TestScheduledStopUnix 119.12
239 TestRunningBinaryUpgrade 226.26
241 TestKubernetesUpgrade 217.49
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
245 TestNoKubernetes/serial/StartWithK8s 98.23
246 TestNoKubernetes/serial/StartWithStopK8s 41.19
247 TestNoKubernetes/serial/Start 57.65
248 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
249 TestNoKubernetes/serial/ProfileList 26.44
250 TestNoKubernetes/serial/Stop 1.34
251 TestNoKubernetes/serial/StartNoArgs 22.56
252 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
254 TestPause/serial/Start 125.62
269 TestNetworkPlugins/group/false 3.87
273 TestStoppedBinaryUpgrade/Setup 0.58
274 TestStoppedBinaryUpgrade/Upgrade 145.25
275 TestPause/serial/SecondStartNoReconfiguration 140.8
276 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
278 TestStartStop/group/old-k8s-version/serial/FirstStart 218.75
280 TestStartStop/group/no-preload/serial/FirstStart 156.03
282 TestStartStop/group/embed-certs/serial/FirstStart 106.62
283 TestPause/serial/Pause 0.8
284 TestPause/serial/VerifyStatus 0.29
285 TestPause/serial/Unpause 0.81
286 TestPause/serial/PauseAgain 1.27
287 TestPause/serial/DeletePaused 1.12
288 TestPause/serial/VerifyDeletedResources 0.25
290 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 144.93
291 TestStartStop/group/no-preload/serial/DeployApp 10.4
292 TestStartStop/group/embed-certs/serial/DeployApp 8.4
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.2
295 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.36
297 TestStartStop/group/old-k8s-version/serial/DeployApp 8.48
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.03
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.33
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.19
305 TestStartStop/group/no-preload/serial/SecondStart 972.83
306 TestStartStop/group/embed-certs/serial/SecondStart 586.83
308 TestStartStop/group/old-k8s-version/serial/SecondStart 705.62
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 876.73
318 TestStartStop/group/newest-cni/serial/FirstStart 65.04
320 TestNetworkPlugins/group/auto/Start 108.49
321 TestStartStop/group/newest-cni/serial/DeployApp 0
322 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.8
323 TestStartStop/group/newest-cni/serial/Stop 4.21
324 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
325 TestStartStop/group/newest-cni/serial/SecondStart 51.13
326 TestNetworkPlugins/group/kindnet/Start 85.19
327 TestNetworkPlugins/group/calico/Start 112.77
328 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
331 TestStartStop/group/newest-cni/serial/Pause 3.26
332 TestNetworkPlugins/group/custom-flannel/Start 111.85
333 TestNetworkPlugins/group/auto/KubeletFlags 0.25
334 TestNetworkPlugins/group/auto/NetCatPod 11.26
335 TestNetworkPlugins/group/auto/DNS 0.24
336 TestNetworkPlugins/group/auto/Localhost 0.25
337 TestNetworkPlugins/group/auto/HairPin 0.21
338 TestNetworkPlugins/group/kindnet/ControllerPod 6.02
339 TestNetworkPlugins/group/enable-default-cni/Start 112.44
340 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
341 TestNetworkPlugins/group/kindnet/NetCatPod 13.29
342 TestNetworkPlugins/group/kindnet/DNS 0.29
343 TestNetworkPlugins/group/kindnet/Localhost 0.24
344 TestNetworkPlugins/group/kindnet/HairPin 0.22
345 TestNetworkPlugins/group/flannel/Start 92.68
346 TestNetworkPlugins/group/calico/ControllerPod 6.01
347 TestNetworkPlugins/group/calico/KubeletFlags 0.24
348 TestNetworkPlugins/group/calico/NetCatPod 12.27
349 TestNetworkPlugins/group/calico/DNS 0.24
350 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.45
351 TestNetworkPlugins/group/calico/Localhost 0.22
352 TestNetworkPlugins/group/calico/HairPin 0.21
353 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.51
354 TestNetworkPlugins/group/custom-flannel/DNS 0.2
355 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
356 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
357 TestNetworkPlugins/group/bridge/Start 105.07
358 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
359 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.33
360 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
361 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
362 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
363 TestNetworkPlugins/group/flannel/ControllerPod 6.01
364 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
365 TestNetworkPlugins/group/flannel/NetCatPod 12.26
366 TestNetworkPlugins/group/flannel/DNS 0.2
367 TestNetworkPlugins/group/flannel/Localhost 0.16
368 TestNetworkPlugins/group/flannel/HairPin 0.16
369 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
370 TestNetworkPlugins/group/bridge/NetCatPod 13.26
371 TestNetworkPlugins/group/bridge/DNS 0.18
372 TestNetworkPlugins/group/bridge/Localhost 0.16
373 TestNetworkPlugins/group/bridge/HairPin 0.17
x
+
TestDownloadOnly/v1.16.0/json-events (7.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-281930 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-281930 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.164427054s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-281930
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-281930: exit status 85 (80.16904ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-281930 | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC |          |
	|         | -p download-only-281930        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:00:22
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:00:22.230440  978494 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:00:22.230658  978494 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:00:22.230672  978494 out.go:309] Setting ErrFile to fd 2...
	I0116 02:00:22.230680  978494 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:00:22.230932  978494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	W0116 02:00:22.231063  978494 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17967-971255/.minikube/config/config.json: open /home/jenkins/minikube-integration/17967-971255/.minikube/config/config.json: no such file or directory
	I0116 02:00:22.231740  978494 out.go:303] Setting JSON to true
	I0116 02:00:22.232883  978494 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":9772,"bootTime":1705360651,"procs":284,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:00:22.232980  978494 start.go:138] virtualization: kvm guest
	I0116 02:00:22.235804  978494 out.go:97] [download-only-281930] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:00:22.237651  978494 out.go:169] MINIKUBE_LOCATION=17967
	W0116 02:00:22.235976  978494 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball: no such file or directory
	I0116 02:00:22.236108  978494 notify.go:220] Checking for updates...
	I0116 02:00:22.241256  978494 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:00:22.243297  978494 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:00:22.244832  978494 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:00:22.247319  978494 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0116 02:00:22.250149  978494 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 02:00:22.250445  978494 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:00:22.285394  978494 out.go:97] Using the kvm2 driver based on user configuration
	I0116 02:00:22.285426  978494 start.go:298] selected driver: kvm2
	I0116 02:00:22.285436  978494 start.go:902] validating driver "kvm2" against <nil>
	I0116 02:00:22.285852  978494 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:00:22.285973  978494 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17967-971255/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 02:00:22.302562  978494 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 02:00:22.302629  978494 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:00:22.303173  978494 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0116 02:00:22.303352  978494 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 02:00:22.303386  978494 cni.go:84] Creating CNI manager for ""
	I0116 02:00:22.303399  978494 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 02:00:22.303408  978494 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 02:00:22.303416  978494 start_flags.go:321] config:
	{Name:download-only-281930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-281930 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:00:22.303674  978494 iso.go:125] acquiring lock: {Name:mkc0ce0b4b435c5fb7570521b794e5982bb7bf6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:00:22.305944  978494 out.go:97] Downloading VM boot image ...
	I0116 02:00:22.305996  978494 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17967-971255/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 02:00:24.927421  978494 out.go:97] Starting control plane node download-only-281930 in cluster download-only-281930
	I0116 02:00:24.927460  978494 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 02:00:24.953014  978494 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0116 02:00:24.953075  978494 cache.go:56] Caching tarball of preloaded images
	I0116 02:00:24.953306  978494 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 02:00:24.955508  978494 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0116 02:00:24.955544  978494 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:00:24.981896  978494 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17967-971255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-281930"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-281930
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (4.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-248523 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-248523 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.655933419s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (4.66s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-248523
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-248523: exit status 85 (78.337134ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-281930 | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC |                     |
	|         | -p download-only-281930        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC | 16 Jan 24 02:00 UTC |
	| delete  | -p download-only-281930        | download-only-281930 | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC | 16 Jan 24 02:00 UTC |
	| start   | -o=json --download-only        | download-only-248523 | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC |                     |
	|         | -p download-only-248523        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:00:29
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:00:29.776744  978658 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:00:29.776853  978658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:00:29.776865  978658 out.go:309] Setting ErrFile to fd 2...
	I0116 02:00:29.776869  978658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:00:29.777058  978658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 02:00:29.777674  978658 out.go:303] Setting JSON to true
	I0116 02:00:29.778743  978658 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":9779,"bootTime":1705360651,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:00:29.778841  978658 start.go:138] virtualization: kvm guest
	I0116 02:00:29.781433  978658 out.go:97] [download-only-248523] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:00:29.783025  978658 out.go:169] MINIKUBE_LOCATION=17967
	I0116 02:00:29.781625  978658 notify.go:220] Checking for updates...
	I0116 02:00:29.785710  978658 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:00:29.787393  978658 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:00:29.788806  978658 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:00:29.790120  978658 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-248523"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-248523
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (3.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-423577 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-423577 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.895413223s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (3.90s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-423577
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-423577: exit status 85 (83.546653ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-281930 | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC |                     |
	|         | -p download-only-281930           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC | 16 Jan 24 02:00 UTC |
	| delete  | -p download-only-281930           | download-only-281930 | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC | 16 Jan 24 02:00 UTC |
	| start   | -o=json --download-only           | download-only-248523 | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC |                     |
	|         | -p download-only-248523           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC | 16 Jan 24 02:00 UTC |
	| delete  | -p download-only-248523           | download-only-248523 | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC | 16 Jan 24 02:00 UTC |
	| start   | -o=json --download-only           | download-only-423577 | jenkins | v1.32.0 | 16 Jan 24 02:00 UTC |                     |
	|         | -p download-only-423577           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:00:34
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:00:34.809999  978811 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:00:34.810161  978811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:00:34.810173  978811 out.go:309] Setting ErrFile to fd 2...
	I0116 02:00:34.810177  978811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:00:34.810380  978811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 02:00:34.811031  978811 out.go:303] Setting JSON to true
	I0116 02:00:34.812102  978811 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":9784,"bootTime":1705360651,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:00:34.812171  978811 start.go:138] virtualization: kvm guest
	I0116 02:00:34.815147  978811 out.go:97] [download-only-423577] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:00:34.815342  978811 notify.go:220] Checking for updates...
	I0116 02:00:34.816924  978811 out.go:169] MINIKUBE_LOCATION=17967
	I0116 02:00:34.818457  978811 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:00:34.819825  978811 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:00:34.821195  978811 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:00:34.822577  978811 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-423577"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-423577
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-558610 --alsologtostderr --binary-mirror http://127.0.0.1:36451 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-558610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-558610
--- PASS: TestBinaryMirror (0.62s)

                                                
                                    
x
+
TestOffline (125.66s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-182672 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-182672 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m4.535755017s)
helpers_test.go:175: Cleaning up "offline-crio-182672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-182672
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-182672: (1.123623266s)
--- PASS: TestOffline (125.66s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-321835
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-321835: exit status 85 (71.147931ms)

                                                
                                                
-- stdout --
	* Profile "addons-321835" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-321835"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-321835
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-321835: exit status 85 (70.037901ms)

                                                
                                                
-- stdout --
	* Profile "addons-321835" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-321835"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (152.3s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-321835 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-321835 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m32.299011501s)
--- PASS: TestAddons/Setup (152.30s)
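Note: the Setup run above enables every addon at start time via repeated --addons flags; the same addons can also be toggled afterwards on the running profile, which is what the parallel tests below do when they disable each one. A minimal sketch using commands that appear elsewhere in this report:
	out/minikube-linux-amd64 -p addons-321835 addons list
	out/minikube-linux-amd64 -p addons-321835 addons enable headlamp
	out/minikube-linux-amd64 -p addons-321835 addons disable headlamp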

                                                
                                    
x
+
TestAddons/parallel/Registry (16.31s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 30.984168ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-j2d5p" [3dd2c768-f1a1-4679-82a2-ad8ff7e9af26] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007127985s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-q7nd5" [096ccac2-854d-42c9-b6c0-a77e42588aeb] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005064517s
addons_test.go:340: (dbg) Run:  kubectl --context addons-321835 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-321835 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-321835 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.18260398s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-321835 ip
2024/01/16 02:03:27 [DEBUG] GET http://192.168.39.11:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-321835 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.31s)
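Note: the test reaches the registry addon from inside the cluster via its Service DNS name and, per the DEBUG line above, from the host on the node IP at port 5000. A hedged host-side sketch (the catalog endpoint is the standard Registry HTTP API; the docker tag/push step is an illustration and assumes the host docker daemon trusts ${REG_IP}:5000 as an insecure registry):
	REG_IP=$(out/minikube-linux-amd64 -p addons-321835 ip)
	curl -s "http://${REG_IP}:5000/v2/_catalog"                 # list repositories in the addon registry
	docker tag gcr.io/k8s-minikube/busybox "${REG_IP}:5000/busybox-test"   # hypothetical image name
	docker push "${REG_IP}:5000/busybox-test"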

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.36s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8x9x8" [dcd9bc57-76ba-4713-962a-4ee077406077] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005881422s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-321835
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-321835: (6.355558941s)
--- PASS: TestAddons/parallel/InspektorGadget (12.36s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.15s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 31.434878ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-296vl" [ebe68b5a-8342-40f5-9ac6-017909a26e0e] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005709245s
addons_test.go:415: (dbg) Run:  kubectl --context addons-321835 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-321835 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-321835 addons disable metrics-server --alsologtostderr -v=1: (1.042495044s)
--- PASS: TestAddons/parallel/MetricsServer (6.15s)
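Note: once metrics-server is healthy the metrics API backs kubectl top cluster-wide, not just the kube-system query used above. For example:
	kubectl --context addons-321835 top nodes
	kubectl --context addons-321835 top pods -A --sort-by=memory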

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (10.44s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.351641ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-r89d7" [5ca2a0e1-24c1-4197-9fef-2a11b3731eb4] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008885188s
addons_test.go:473: (dbg) Run:  kubectl --context addons-321835 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-321835 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.726817948s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-321835 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.44s)

                                                
                                    
x
+
TestAddons/parallel/CSI (70.68s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 32.982307ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-321835 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-321835 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f003bc5c-d361-430c-be26-5998565f2399] Pending
helpers_test.go:344: "task-pv-pod" [f003bc5c-d361-430c-be26-5998565f2399] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f003bc5c-d361-430c-be26-5998565f2399] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004956201s
addons_test.go:584: (dbg) Run:  kubectl --context addons-321835 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-321835 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-321835 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-321835 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-321835 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-321835 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-321835 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [26cd35ee-41ff-4f18-a0ee-61a0c9f23326] Pending
helpers_test.go:344: "task-pv-pod-restore" [26cd35ee-41ff-4f18-a0ee-61a0c9f23326] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [26cd35ee-41ff-4f18-a0ee-61a0c9f23326] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005014856s
addons_test.go:626: (dbg) Run:  kubectl --context addons-321835 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-321835 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-321835 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-321835 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-321835 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.870107664s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-321835 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (70.68s)
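Note: the flow above is create PVC -> pod -> VolumeSnapshot -> restore a new PVC from that snapshot. The testdata manifests themselves are not reproduced in this log; the following is a minimal sketch of the snapshot/restore pair only, assuming the addon's usual class names (csi-hostpath-sc, csi-hostpath-snapclass) and an arbitrary 1Gi size:
	kubectl --context addons-321835 apply -f - <<'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
	  source:
	    persistentVolumeClaimName: hpvc
	---
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc                 # assumed class name
	  dataSource:
	    name: new-snapshot-demo
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	  accessModes: [ReadWriteOnce]
	  resources:
	    requests:
	      storage: 1Gi
	EOF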

                                                
                                    
x
+
TestAddons/parallel/Headlamp (15.6s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-321835 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-321835 --alsologtostderr -v=1: (1.590121447s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-6q6k8" [0421c569-8afa-4fcf-9eaf-52b494eb32b6] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-6q6k8" [0421c569-8afa-4fcf-9eaf-52b494eb32b6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-6q6k8" [0421c569-8afa-4fcf-9eaf-52b494eb32b6] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.009075675s
--- PASS: TestAddons/parallel/Headlamp (15.60s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.77s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-ddjr9" [e61d4623-2625-4f0b-ad10-8e340519b88d] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.006873112s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-321835
--- PASS: TestAddons/parallel/CloudSpanner (6.77s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (53.55s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-321835 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-321835 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321835 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [092e7ff3-24cc-4c52-b896-e85e30ba41a9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [092e7ff3-24cc-4c52-b896-e85e30ba41a9] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [092e7ff3-24cc-4c52-b896-e85e30ba41a9] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005209747s
addons_test.go:891: (dbg) Run:  kubectl --context addons-321835 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-321835 ssh "cat /opt/local-path-provisioner/pvc-4b748b59-8a26-4a5c-b1da-42b4fce585de_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-321835 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-321835 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-321835 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-321835 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.591807086s)
--- PASS: TestAddons/parallel/LocalPath (53.55s)
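Note: the storage-provisioner-rancher addon provisions hostPath-backed volumes under /opt/local-path-provisioner, which is where the ssh step above reads file1 back. A sketch of an equivalent claim, assuming the addon registers the provisioner's usual local-path StorageClass:
	kubectl --context addons-321835 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  storageClassName: local-path     # assumed class name registered by the addon
	  accessModes: [ReadWriteOnce]
	  resources:
	    requests:
	      storage: 128Mi
	EOF
With the provisioner's usual WaitForFirstConsumer binding mode the claim stays Pending until a pod mounts it, which matches the repeated phase polls logged above.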

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.64s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nvq58" [c9de0950-d70c-441e-adb4-e56150f45bb8] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.011519586s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-321835
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.64s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-r5xnq" [4be158dc-651a-4047-9116-213e19d2a128] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.006077855s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-321835 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-321835 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestCertOptions (96.35s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-959610 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0116 02:58:12.495528  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-959610 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m33.702297369s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-959610 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-959610 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-959610 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-959610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-959610
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-959610: (2.082548628s)
--- PASS: TestCertOptions (96.35s)
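Note: the assertion behind the openssl call above is that the requested --apiserver-ips/--apiserver-names end up as Subject Alternative Names in the apiserver certificate and that the server listens on the requested port. To inspect just the SANs by hand (same ssh form as the test, with a grep filter added):
	out/minikube-linux-amd64 -p cert-options-959610 ssh \
	  "openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'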

                                                
                                    
x
+
TestCertExpiration (371.01s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-920153 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-920153 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m3.962182174s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-920153 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-920153 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (2m6.145803902s)
helpers_test.go:175: Cleaning up "cert-expiration-920153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-920153
--- PASS: TestCertExpiration (371.01s)
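Note: the test first starts with --cert-expiration=3m, lets those short-lived certificates lapse, then restarts with --cert-expiration=8760h (one year) to confirm they are regenerated. The remaining lifetime can be checked directly on a live profile, e.g.:
	out/minikube-linux-amd64 -p cert-expiration-920153 ssh \
	  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"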

                                                
                                    
x
+
TestForceSystemdFlag (104.16s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-089420 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-089420 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m42.883493093s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-089420 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-089420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-089420
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-089420: (1.056930152s)
--- PASS: TestForceSystemdFlag (104.16s)
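Note: the ssh step above cats the CRI-O drop-in to verify that --force-systemd reached the runtime configuration. A narrower check on a live profile, assuming the stock CRI-O layout where the cgroup manager is set in that drop-in:
	out/minikube-linux-amd64 -p force-systemd-flag-089420 ssh \
	  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
	# expected line (assumption): cgroup_manager = "systemd"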

                                                
                                    
x
+
TestForceSystemdEnv (48.38s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-275613 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-275613 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.326373117s)
helpers_test.go:175: Cleaning up "force-systemd-env-275613" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-275613
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-275613: (1.05809373s)
--- PASS: TestForceSystemdEnv (48.38s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.43s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.43s)

                                                
                                    
x
+
TestErrorSpam/setup (48.23s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-149947 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-149947 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-149947 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-149947 --driver=kvm2  --container-runtime=crio: (48.234540313s)
--- PASS: TestErrorSpam/setup (48.23s)

                                                
                                    
x
+
TestErrorSpam/start (0.41s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-149947 --log_dir /tmp/nospam-149947 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-149947 --log_dir /tmp/nospam-149947 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-149947 --log_dir /tmp/nospam-149947 start --dry-run
--- PASS: TestErrorSpam/start (0.41s)

                                                
                                    
x
+
TestErrorSpam/status (0.81s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-149947 --log_dir /tmp/nospam-149947 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-149947 --log_dir /tmp/nospam-149947 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-149947 --log_dir /tmp/nospam-149947 status
--- PASS: TestErrorSpam/status (0.81s)

                                                
                                    
x
+
TestErrorSpam/pause (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-149947 --log_dir /tmp/nospam-149947 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-149947 --log_dir /tmp/nospam-149947 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-149947 --log_dir /tmp/nospam-149947 pause
--- PASS: TestErrorSpam/pause (1.63s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-149947 --log_dir /tmp/nospam-149947 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-149947 --log_dir /tmp/nospam-149947 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-149947 --log_dir /tmp/nospam-149947 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

                                                
                                    
x
+
TestErrorSpam/stop (2.29s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-149947 --log_dir /tmp/nospam-149947 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-149947 --log_dir /tmp/nospam-149947 stop: (2.104775479s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-149947 --log_dir /tmp/nospam-149947 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-149947 --log_dir /tmp/nospam-149947 stop
--- PASS: TestErrorSpam/stop (2.29s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17967-971255/.minikube/files/etc/test/nested/copy/978482/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (60.51s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-941139 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-941139 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m0.513591984s)
--- PASS: TestFunctional/serial/StartWithProxy (60.51s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (36.01s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-941139 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-941139 --alsologtostderr -v=8: (36.004355216s)
functional_test.go:659: soft start took 36.005136682s for "functional-941139" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.01s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-941139 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-941139 cache add registry.k8s.io/pause:3.1: (1.12045248s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-941139 cache add registry.k8s.io/pause:3.3: (1.104198545s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-941139 cache add registry.k8s.io/pause:latest: (1.067825164s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.29s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-941139 /tmp/TestFunctionalserialCacheCmdcacheadd_local2622608042/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 cache add minikube-local-cache-test:functional-941139
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 cache delete minikube-local-cache-test:functional-941139
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-941139
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941139 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (253.099286ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)
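The cache_reload steps above can be replayed by hand against the same cluster; a minimal shell sketch using only commands that appear in this run (the out/minikube-linux-amd64 binary path and the functional-941139 profile name are taken from the log, and the profile is assumed to be running):

	# remove the cached image from inside the node, then confirm it is gone (inspecti exits non-zero)
	out/minikube-linux-amd64 -p functional-941139 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-941139 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# push everything in minikube's local cache back into the node, then re-check
	out/minikube-linux-amd64 -p functional-941139 cache reload
	out/minikube-linux-amd64 -p functional-941139 ssh sudo crictl inspecti registry.k8s.io/pause:latest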

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 kubectl -- --context functional-941139 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-941139 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.93s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-941139 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-941139 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.93371558s)
functional_test.go:757: restart took 32.933919067s for "functional-941139" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.93s)
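The restart above passes a component flag straight through to the apiserver and waits for every component to come back; the same invocation outside the harness (profile name and flag value taken from this run) is:

	# restart the existing profile, injecting an apiserver admission plugin and waiting for all components
	out/minikube-linux-amd64 start -p functional-941139 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all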

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-941139 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.58s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-941139 logs: (1.584059135s)
--- PASS: TestFunctional/serial/LogsCmd (1.58s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.57s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 logs --file /tmp/TestFunctionalserialLogsFileCmd3772228431/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-941139 logs --file /tmp/TestFunctionalserialLogsFileCmd3772228431/001/logs.txt: (1.572756771s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.57s)
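Both log tests exercise the same command, once to stdout and once to a file; a short sketch (the profile name comes from this run, and the --file destination is an arbitrary writable path, not the tmp directory the test generated):

	out/minikube-linux-amd64 -p functional-941139 logs                       # print cluster logs to stdout
	out/minikube-linux-amd64 -p functional-941139 logs --file /tmp/logs.txt  # write the same logs to a file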

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.79s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-941139 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-941139
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-941139: exit status 115 (319.148491ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.199:31139 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-941139 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-941139 delete -f testdata/invalidsvc.yaml: (1.225866969s)
--- PASS: TestFunctional/serial/InvalidService (4.79s)
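The exit status 115 above is the expected outcome for a service with no running backing pod; a sketch of the same sequence using the manifest and service name from this run:

	kubectl --context functional-941139 apply -f testdata/invalidsvc.yaml
	# exits 115 with SVC_UNREACHABLE because no running pod backs invalid-svc
	out/minikube-linux-amd64 service invalid-svc -p functional-941139
	kubectl --context functional-941139 delete -f testdata/invalidsvc.yaml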

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941139 config get cpus: exit status 14 (77.982282ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941139 config get cpus: exit status 14 (67.796324ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
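The config subtest relies on config get exiting with code 14 when a key is unset; a sketch of the same set/get/unset cycle, with the cpus key and value taken from this run:

	out/minikube-linux-amd64 -p functional-941139 config unset cpus
	out/minikube-linux-amd64 -p functional-941139 config get cpus    # exit 14: key not found in config
	out/minikube-linux-amd64 -p functional-941139 config set cpus 2
	out/minikube-linux-amd64 -p functional-941139 config get cpus    # prints 2
	out/minikube-linux-amd64 -p functional-941139 config unset cpus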

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (25.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-941139 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-941139 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 986887: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (25.20s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-941139 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-941139 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (172.382921ms)

                                                
                                                
-- stdout --
	* [functional-941139] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 02:12:47.932386  986445 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:12:47.932561  986445 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:12:47.932575  986445 out.go:309] Setting ErrFile to fd 2...
	I0116 02:12:47.932583  986445 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:12:47.932800  986445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 02:12:47.933499  986445 out.go:303] Setting JSON to false
	I0116 02:12:47.934943  986445 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10517,"bootTime":1705360651,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:12:47.935027  986445 start.go:138] virtualization: kvm guest
	I0116 02:12:47.937861  986445 out.go:177] * [functional-941139] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:12:47.939318  986445 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 02:12:47.939327  986445 notify.go:220] Checking for updates...
	I0116 02:12:47.940900  986445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:12:47.942317  986445 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:12:47.943689  986445 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:12:47.945024  986445 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:12:47.946333  986445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:12:47.948209  986445 config.go:182] Loaded profile config "functional-941139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:12:47.948911  986445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:12:47.948975  986445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:12:47.965287  986445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37239
	I0116 02:12:47.965764  986445 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:12:47.966487  986445 main.go:141] libmachine: Using API Version  1
	I0116 02:12:47.966528  986445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:12:47.966947  986445 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:12:47.967191  986445 main.go:141] libmachine: (functional-941139) Calling .DriverName
	I0116 02:12:47.967582  986445 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:12:47.968047  986445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:12:47.968098  986445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:12:47.984705  986445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33821
	I0116 02:12:47.985177  986445 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:12:47.985796  986445 main.go:141] libmachine: Using API Version  1
	I0116 02:12:47.985854  986445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:12:47.986217  986445 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:12:47.986480  986445 main.go:141] libmachine: (functional-941139) Calling .DriverName
	I0116 02:12:48.027651  986445 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 02:12:48.029146  986445 start.go:298] selected driver: kvm2
	I0116 02:12:48.029170  986445 start.go:902] validating driver "kvm2" against &{Name:functional-941139 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-941139 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.199 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:12:48.029333  986445 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:12:48.032061  986445 out.go:177] 
	W0116 02:12:48.033574  986445 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0116 02:12:48.035159  986445 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-941139 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.34s)
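--dry-run validates the requested configuration against the existing profile without touching the VM; the failing and passing variants from this run are:

	# rejected: 250MB is below the usable minimum of 1800MB, exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
	out/minikube-linux-amd64 start -p functional-941139 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio
	# accepted: the same dry run without the memory override
	out/minikube-linux-amd64 start -p functional-941139 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio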

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-941139 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-941139 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (165.357979ms)

                                                
                                                
-- stdout --
	* [functional-941139] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 02:12:48.272626  986501 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:12:48.272793  986501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:12:48.272804  986501 out.go:309] Setting ErrFile to fd 2...
	I0116 02:12:48.272811  986501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:12:48.273136  986501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 02:12:48.273740  986501 out.go:303] Setting JSON to false
	I0116 02:12:48.274888  986501 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10518,"bootTime":1705360651,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:12:48.274971  986501 start.go:138] virtualization: kvm guest
	I0116 02:12:48.277380  986501 out.go:177] * [functional-941139] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0116 02:12:48.279069  986501 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 02:12:48.279116  986501 notify.go:220] Checking for updates...
	I0116 02:12:48.280656  986501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:12:48.282322  986501 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:12:48.283859  986501 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:12:48.285542  986501 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:12:48.287351  986501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:12:48.289778  986501 config.go:182] Loaded profile config "functional-941139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:12:48.290466  986501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:12:48.290540  986501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:12:48.307319  986501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I0116 02:12:48.307898  986501 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:12:48.308583  986501 main.go:141] libmachine: Using API Version  1
	I0116 02:12:48.308616  986501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:12:48.308998  986501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:12:48.309221  986501 main.go:141] libmachine: (functional-941139) Calling .DriverName
	I0116 02:12:48.309512  986501 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:12:48.310503  986501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:12:48.310652  986501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:12:48.328124  986501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43045
	I0116 02:12:48.328596  986501 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:12:48.329143  986501 main.go:141] libmachine: Using API Version  1
	I0116 02:12:48.329176  986501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:12:48.329540  986501 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:12:48.329791  986501 main.go:141] libmachine: (functional-941139) Calling .DriverName
	I0116 02:12:48.364184  986501 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0116 02:12:48.365530  986501 start.go:298] selected driver: kvm2
	I0116 02:12:48.365546  986501 start.go:902] validating driver "kvm2" against &{Name:functional-941139 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-941139 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.199 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:12:48.365693  986501 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:12:48.368143  986501 out.go:177] 
	W0116 02:12:48.369642  986501 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0116 02:12:48.371058  986501 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)
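The status command accepts a Go template via -f and JSON via -o json, as exercised above; a sketch of the three forms (the .Host, .Kubelet, .APIServer and .Kubeconfig fields follow the logged template, single-quoted here so the shell passes the braces through literally):

	out/minikube-linux-amd64 -p functional-941139 status
	out/minikube-linux-amd64 -p functional-941139 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	out/minikube-linux-amd64 -p functional-941139 status -o json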

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-941139 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-941139 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-zvlmv" [fe544197-1ea4-4d8c-8533-15cd4e718b6f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-zvlmv" [fe544197-1ea4-4d8c-8533-15cd4e718b6f] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.00575051s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.199:30282
functional_test.go:1674: http://192.168.39.199:30282: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-zvlmv

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.199:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.199:30282
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.69s)
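The connect test wires a NodePort service to an echoserver deployment and resolves its URL through minikube; a sketch of the same flow (image, names and port come from this run; the final curl is a stand-in for the Go HTTP client the test uses, and assumes the pod has reached Running):

	kubectl --context functional-941139 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-941139 expose deployment hello-node-connect --type=NodePort --port=8080
	# prints something like http://192.168.39.199:30282
	out/minikube-linux-amd64 -p functional-941139 service hello-node-connect --url
	curl "$(out/minikube-linux-amd64 -p functional-941139 service hello-node-connect --url)"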

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (46.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [16eec926-4f6b-481a-9208-0f33a963a421] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005277769s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-941139 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-941139 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-941139 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-941139 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-941139 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [466c752d-888f-4efe-a05e-fb05ca00c24f] Pending
helpers_test.go:344: "sp-pod" [466c752d-888f-4efe-a05e-fb05ca00c24f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [466c752d-888f-4efe-a05e-fb05ca00c24f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004439413s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-941139 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-941139 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-941139 delete -f testdata/storage-provisioner/pod.yaml: (2.150670973s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-941139 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bd3fa3a5-62d6-4c2d-aac8-0914d05532fb] Pending
helpers_test.go:344: "sp-pod" [bd3fa3a5-62d6-4c2d-aac8-0914d05532fb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bd3fa3a5-62d6-4c2d-aac8-0914d05532fb] Running
E0116 02:13:12.495848  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 02:13:12.501875  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 02:13:12.512199  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 02:13:12.532503  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 02:13:12.572813  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 02:13:12.653166  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 02:13:12.813588  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 02:13:13.133888  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 02:13:13.775124  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.014926439s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-941139 exec sp-pod -- ls /tmp/mount
E0116 02:13:15.055777  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.91s)
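The PVC test checks that data written into the claim survives deleting and recreating the pod; the same verification by hand, with the manifests and pod name used by the test (waits for the pod to become Ready are elided):

	kubectl --context functional-941139 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-941139 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-941139 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-941139 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-941139 apply -f testdata/storage-provisioner/pod.yaml
	# the file created by the first pod is still present on the claim
	kubectl --context functional-941139 exec sp-pod -- ls /tmp/mount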

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh -n functional-941139 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 cp functional-941139:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd823722309/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh -n functional-941139 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh -n functional-941139 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.57s)
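minikube cp copies files in either direction between the host and the node, as exercised above; a sketch using the in-VM path from this run (the host-side destination is arbitrary rather than the test's generated tmp directory):

	# host -> node, then verify inside the VM
	out/minikube-linux-amd64 -p functional-941139 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-941139 ssh -n functional-941139 "sudo cat /home/docker/cp-test.txt"
	# node -> host
	out/minikube-linux-amd64 -p functional-941139 cp functional-941139:/home/docker/cp-test.txt /tmp/cp-test.txt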

                                                
                                    
x
+
TestFunctional/parallel/MySQL (28.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-941139 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-v2b2m" [40179c97-1077-4671-bd97-a416f2fa85e6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-v2b2m" [40179c97-1077-4671-bd97-a416f2fa85e6] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.00691345s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-941139 exec mysql-859648c796-v2b2m -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-941139 exec mysql-859648c796-v2b2m -- mysql -ppassword -e "show databases;": exit status 1 (331.03392ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-941139 exec mysql-859648c796-v2b2m -- mysql -ppassword -e "show databases;"
2024/01/16 02:13:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-941139 exec mysql-859648c796-v2b2m -- mysql -ppassword -e "show databases;": exit status 1 (537.418002ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-941139 exec mysql-859648c796-v2b2m -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.81s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/978482/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "sudo cat /etc/test/nested/copy/978482/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/978482.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "sudo cat /etc/ssl/certs/978482.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/978482.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "sudo cat /usr/share/ca-certificates/978482.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/9784822.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "sudo cat /etc/ssl/certs/9784822.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/9784822.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "sudo cat /usr/share/ca-certificates/9784822.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.67s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-941139 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941139 ssh "sudo systemctl is-active docker": exit status 1 (247.003693ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941139 ssh "sudo systemctl is-active containerd": exit status 1 (269.416827ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
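With crio as the active runtime, docker and containerd are expected to be inactive, so the non-zero exits above are the desired result rather than a failure: systemctl is-active exits non-zero for an inactive unit, and minikube surfaces that as exit status 1. The same check by hand:

	# 'inactive' on stdout plus a non-zero exit is the expected outcome on a crio cluster
	out/minikube-linux-amd64 -p functional-941139 ssh "sudo systemctl is-active docker"
	out/minikube-linux-amd64 -p functional-941139 ssh "sudo systemctl is-active containerd"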

                                                
                                    
x
+
TestFunctional/parallel/License (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (12.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-941139 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-941139 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-f6kdk" [ab2928f2-0281-4789-bde0-b63962259b35] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-f6kdk" [ab2928f2-0281-4789-bde0-b63962259b35] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004894243s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.25s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-amd64 -p functional-941139 version -o=json --components: (1.353367246s)
--- PASS: TestFunctional/parallel/Version/components (1.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-941139 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-941139
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-941139
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-941139 image ls --format short --alsologtostderr:
I0116 02:12:53.304497  986840 out.go:296] Setting OutFile to fd 1 ...
I0116 02:12:53.304648  986840 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:12:53.304658  986840 out.go:309] Setting ErrFile to fd 2...
I0116 02:12:53.304663  986840 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:12:53.304855  986840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
I0116 02:12:53.305515  986840 config.go:182] Loaded profile config "functional-941139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:12:53.305620  986840 config.go:182] Loaded profile config "functional-941139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:12:53.306025  986840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:12:53.306092  986840 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:12:53.321744  986840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
I0116 02:12:53.322370  986840 main.go:141] libmachine: () Calling .GetVersion
I0116 02:12:53.323111  986840 main.go:141] libmachine: Using API Version  1
I0116 02:12:53.323143  986840 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:12:53.323463  986840 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:12:53.323671  986840 main.go:141] libmachine: (functional-941139) Calling .GetState
I0116 02:12:53.325743  986840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:12:53.325823  986840 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:12:53.341823  986840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43879
I0116 02:12:53.342334  986840 main.go:141] libmachine: () Calling .GetVersion
I0116 02:12:53.342972  986840 main.go:141] libmachine: Using API Version  1
I0116 02:12:53.343008  986840 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:12:53.343388  986840 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:12:53.343627  986840 main.go:141] libmachine: (functional-941139) Calling .DriverName
I0116 02:12:53.343907  986840 ssh_runner.go:195] Run: systemctl --version
I0116 02:12:53.343947  986840 main.go:141] libmachine: (functional-941139) Calling .GetSSHHostname
I0116 02:12:53.347245  986840 main.go:141] libmachine: (functional-941139) DBG | domain functional-941139 has defined MAC address 52:54:00:61:41:b1 in network mk-functional-941139
I0116 02:12:53.347770  986840 main.go:141] libmachine: (functional-941139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:41:b1", ip: ""} in network mk-functional-941139: {Iface:virbr1 ExpiryTime:2024-01-16 03:10:18 +0000 UTC Type:0 Mac:52:54:00:61:41:b1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:functional-941139 Clientid:01:52:54:00:61:41:b1}
I0116 02:12:53.347813  986840 main.go:141] libmachine: (functional-941139) DBG | domain functional-941139 has defined IP address 192.168.39.199 and MAC address 52:54:00:61:41:b1 in network mk-functional-941139
I0116 02:12:53.347990  986840 main.go:141] libmachine: (functional-941139) Calling .GetSSHPort
I0116 02:12:53.348206  986840 main.go:141] libmachine: (functional-941139) Calling .GetSSHKeyPath
I0116 02:12:53.348397  986840 main.go:141] libmachine: (functional-941139) Calling .GetSSHUsername
I0116 02:12:53.348558  986840 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/functional-941139/id_rsa Username:docker}
I0116 02:12:53.509064  986840 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 02:12:53.792624  986840 main.go:141] libmachine: Making call to close driver server
I0116 02:12:53.792643  986840 main.go:141] libmachine: (functional-941139) Calling .Close
I0116 02:12:53.793009  986840 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:12:53.793029  986840 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:12:53.793043  986840 main.go:141] libmachine: Making call to close driver server
I0116 02:12:53.793049  986840 main.go:141] libmachine: (functional-941139) DBG | Closing plugin on server side
I0116 02:12:53.793051  986840 main.go:141] libmachine: (functional-941139) Calling .Close
I0116 02:12:53.793409  986840 main.go:141] libmachine: (functional-941139) DBG | Closing plugin on server side
I0116 02:12:53.793426  986840 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:12:53.793444  986840 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.57s)
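For reference, the listing above can be reproduced outside the test harness: the CLI shells into the node, runs "sudo crictl images --output json" (visible in the ssh_runner line above), and renders the requested format. A minimal Go sketch, assuming the minikube binary and the functional-941139 profile from this log are available locally; it is not the test's own helper:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Run the same command the test runs and print one image reference per line.
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-941139", "image", "ls", "--format", "short").Output()
		if err != nil {
			panic(err)
		}
		for _, ref := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			fmt.Println(ref) // e.g. registry.k8s.io/pause:3.9
		}
	}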

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-941139 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-941139  | b09dce6d4d553 | 3.34kB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | a8758716bb6aa | 191MB  |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/google-containers/addon-resizer  | functional-941139  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/my-image                      | functional-941139  | 6182c46f92828 | 1.47MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-941139 image ls --format table --alsologtostderr:
I0116 02:13:08.042637  987036 out.go:296] Setting OutFile to fd 1 ...
I0116 02:13:08.042793  987036 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:13:08.042808  987036 out.go:309] Setting ErrFile to fd 2...
I0116 02:13:08.042816  987036 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:13:08.043031  987036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
I0116 02:13:08.043654  987036 config.go:182] Loaded profile config "functional-941139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:13:08.043748  987036 config.go:182] Loaded profile config "functional-941139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:13:08.044213  987036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:13:08.044266  987036 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:13:08.059758  987036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
I0116 02:13:08.060266  987036 main.go:141] libmachine: () Calling .GetVersion
I0116 02:13:08.060842  987036 main.go:141] libmachine: Using API Version  1
I0116 02:13:08.060881  987036 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:13:08.061319  987036 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:13:08.061581  987036 main.go:141] libmachine: (functional-941139) Calling .GetState
I0116 02:13:08.063497  987036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:13:08.063549  987036 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:13:08.078527  987036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44113
I0116 02:13:08.079024  987036 main.go:141] libmachine: () Calling .GetVersion
I0116 02:13:08.079603  987036 main.go:141] libmachine: Using API Version  1
I0116 02:13:08.079641  987036 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:13:08.079970  987036 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:13:08.080178  987036 main.go:141] libmachine: (functional-941139) Calling .DriverName
I0116 02:13:08.080394  987036 ssh_runner.go:195] Run: systemctl --version
I0116 02:13:08.080429  987036 main.go:141] libmachine: (functional-941139) Calling .GetSSHHostname
I0116 02:13:08.083003  987036 main.go:141] libmachine: (functional-941139) DBG | domain functional-941139 has defined MAC address 52:54:00:61:41:b1 in network mk-functional-941139
I0116 02:13:08.083452  987036 main.go:141] libmachine: (functional-941139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:41:b1", ip: ""} in network mk-functional-941139: {Iface:virbr1 ExpiryTime:2024-01-16 03:10:18 +0000 UTC Type:0 Mac:52:54:00:61:41:b1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:functional-941139 Clientid:01:52:54:00:61:41:b1}
I0116 02:13:08.083486  987036 main.go:141] libmachine: (functional-941139) DBG | domain functional-941139 has defined IP address 192.168.39.199 and MAC address 52:54:00:61:41:b1 in network mk-functional-941139
I0116 02:13:08.083631  987036 main.go:141] libmachine: (functional-941139) Calling .GetSSHPort
I0116 02:13:08.083803  987036 main.go:141] libmachine: (functional-941139) Calling .GetSSHKeyPath
I0116 02:13:08.083959  987036 main.go:141] libmachine: (functional-941139) Calling .GetSSHUsername
I0116 02:13:08.084077  987036 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/functional-941139/id_rsa Username:docker}
I0116 02:13:08.212573  987036 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 02:13:08.333322  987036 main.go:141] libmachine: Making call to close driver server
I0116 02:13:08.333352  987036 main.go:141] libmachine: (functional-941139) Calling .Close
I0116 02:13:08.333674  987036 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:13:08.333693  987036 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:13:08.333712  987036 main.go:141] libmachine: Making call to close driver server
I0116 02:13:08.333726  987036 main.go:141] libmachine: (functional-941139) Calling .Close
I0116 02:13:08.333990  987036 main.go:141] libmachine: (functional-941139) DBG | Closing plugin on server side
I0116 02:13:08.334019  987036 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:13:08.334049  987036 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-941139 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3
e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f
9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"b6d887078411bc98e81246d483f5148879425f5eb4345d56e67e2afdfe75bfd1","repoDigests":["docker.io/library/624e7c85b165641ad8bc2340b6b7148cd9f62e66dfbe5ef1dd573a9e675dc6c7-tmp@sha256:ed5701b12846192cea120b0454e90a528ea6ae91492c019fc8df8245dd05413c"],"repoTags":[],"size":"1466017"},{"id":"a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6","repoDigests":["docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c","docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac"],"repoTags":["doc
ker.io/library/nginx:latest"],"size":"190867606"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-941139"],"size":"34114467"},{"id":"6182c46f92828bc8dcd65edfe0a7e656e7434afbc98ed8c5c563c2ffffb03ad6","repoDigests":["localhost/my-image@sha256:15f156b4fc7fc6e57dff3f99484e21126f5dbf5bb43b31efa1ce0a0ac4b557a1"],"repoTags":["localhost/my-image:functional-941139"],"size":"1468600"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09
683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"b09dce6d4d553eaa3c9048ca5adf8447c96a45340ecf2b76b768405d19a9d28c","repoDigests":["localhost/minikube-local-cache-test@sha256:d0f0e0f7a17942c9f654b89f631b473fee3c7cfff0a00d1e87f196985707efae"],"repoTags":["localhost/minikube-local-cache-test:functional-941139"],"size":"3343"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768
d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@s
ha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-941139 image ls --format json --alsologtostderr:
I0116 02:13:07.659230  987012 out.go:296] Setting OutFile to fd 1 ...
I0116 02:13:07.659398  987012 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:13:07.659410  987012 out.go:309] Setting ErrFile to fd 2...
I0116 02:13:07.659419  987012 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:13:07.659663  987012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
I0116 02:13:07.660379  987012 config.go:182] Loaded profile config "functional-941139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:13:07.660517  987012 config.go:182] Loaded profile config "functional-941139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:13:07.661025  987012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:13:07.661093  987012 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:13:07.676874  987012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45281
I0116 02:13:07.677512  987012 main.go:141] libmachine: () Calling .GetVersion
I0116 02:13:07.678305  987012 main.go:141] libmachine: Using API Version  1
I0116 02:13:07.678341  987012 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:13:07.678751  987012 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:13:07.679031  987012 main.go:141] libmachine: (functional-941139) Calling .GetState
I0116 02:13:07.681111  987012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:13:07.681159  987012 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:13:07.696424  987012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40943
I0116 02:13:07.697064  987012 main.go:141] libmachine: () Calling .GetVersion
I0116 02:13:07.697662  987012 main.go:141] libmachine: Using API Version  1
I0116 02:13:07.697688  987012 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:13:07.698120  987012 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:13:07.698379  987012 main.go:141] libmachine: (functional-941139) Calling .DriverName
I0116 02:13:07.698666  987012 ssh_runner.go:195] Run: systemctl --version
I0116 02:13:07.698713  987012 main.go:141] libmachine: (functional-941139) Calling .GetSSHHostname
I0116 02:13:07.701829  987012 main.go:141] libmachine: (functional-941139) DBG | domain functional-941139 has defined MAC address 52:54:00:61:41:b1 in network mk-functional-941139
I0116 02:13:07.702254  987012 main.go:141] libmachine: (functional-941139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:41:b1", ip: ""} in network mk-functional-941139: {Iface:virbr1 ExpiryTime:2024-01-16 03:10:18 +0000 UTC Type:0 Mac:52:54:00:61:41:b1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:functional-941139 Clientid:01:52:54:00:61:41:b1}
I0116 02:13:07.702286  987012 main.go:141] libmachine: (functional-941139) DBG | domain functional-941139 has defined IP address 192.168.39.199 and MAC address 52:54:00:61:41:b1 in network mk-functional-941139
I0116 02:13:07.702451  987012 main.go:141] libmachine: (functional-941139) Calling .GetSSHPort
I0116 02:13:07.702656  987012 main.go:141] libmachine: (functional-941139) Calling .GetSSHKeyPath
I0116 02:13:07.702876  987012 main.go:141] libmachine: (functional-941139) Calling .GetSSHUsername
I0116 02:13:07.703043  987012 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/functional-941139/id_rsa Username:docker}
I0116 02:13:07.867520  987012 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 02:13:07.960250  987012 main.go:141] libmachine: Making call to close driver server
I0116 02:13:07.960269  987012 main.go:141] libmachine: (functional-941139) Calling .Close
I0116 02:13:07.960599  987012 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:13:07.960617  987012 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:13:07.960628  987012 main.go:141] libmachine: (functional-941139) DBG | Closing plugin on server side
I0116 02:13:07.960632  987012 main.go:141] libmachine: Making call to close driver server
I0116 02:13:07.960641  987012 main.go:141] libmachine: (functional-941139) Calling .Close
I0116 02:13:07.960924  987012 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:13:07.960964  987012 main.go:141] libmachine: (functional-941139) DBG | Closing plugin on server side
I0116 02:13:07.960974  987012 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.38s)
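The JSON stdout above is an array of objects with id, repoDigests, repoTags, and size fields (size is a byte count encoded as a string). A minimal Go sketch for decoding a captured copy of that output, assuming it was saved to a hypothetical images.json file; the struct below simply mirrors the fields shown here and is not minikube's own type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// image mirrors the fields visible in the stdout above.
	type image struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"` // byte count, encoded as a string
	}

	func main() {
		data, err := os.ReadFile("images.json") // captured output of: image ls --format json
		if err != nil {
			panic(err)
		}
		var images []image
		if err := json.Unmarshal(data, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			fmt.Printf("%-60v %10s bytes\n", img.RepoTags, img.Size)
		}
	}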

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-941139 image ls --format yaml --alsologtostderr:
- id: b09dce6d4d553eaa3c9048ca5adf8447c96a45340ecf2b76b768405d19a9d28c
repoDigests:
- localhost/minikube-local-cache-test@sha256:d0f0e0f7a17942c9f654b89f631b473fee3c7cfff0a00d1e87f196985707efae
repoTags:
- localhost/minikube-local-cache-test:functional-941139
size: "3343"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
repoDigests:
- docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-941139
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-941139 image ls --format yaml --alsologtostderr:
I0116 02:12:53.872823  986863 out.go:296] Setting OutFile to fd 1 ...
I0116 02:12:53.873019  986863 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:12:53.873032  986863 out.go:309] Setting ErrFile to fd 2...
I0116 02:12:53.873039  986863 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:12:53.873275  986863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
I0116 02:12:53.874006  986863 config.go:182] Loaded profile config "functional-941139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:12:53.874163  986863 config.go:182] Loaded profile config "functional-941139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:12:53.874604  986863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:12:53.874684  986863 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:12:53.890788  986863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
I0116 02:12:53.891263  986863 main.go:141] libmachine: () Calling .GetVersion
I0116 02:12:53.891987  986863 main.go:141] libmachine: Using API Version  1
I0116 02:12:53.892019  986863 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:12:53.892428  986863 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:12:53.892666  986863 main.go:141] libmachine: (functional-941139) Calling .GetState
I0116 02:12:53.894650  986863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:12:53.894693  986863 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:12:53.910828  986863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36795
I0116 02:12:53.911269  986863 main.go:141] libmachine: () Calling .GetVersion
I0116 02:12:53.911911  986863 main.go:141] libmachine: Using API Version  1
I0116 02:12:53.911957  986863 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:12:53.912323  986863 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:12:53.912534  986863 main.go:141] libmachine: (functional-941139) Calling .DriverName
I0116 02:12:53.912822  986863 ssh_runner.go:195] Run: systemctl --version
I0116 02:12:53.912865  986863 main.go:141] libmachine: (functional-941139) Calling .GetSSHHostname
I0116 02:12:53.916068  986863 main.go:141] libmachine: (functional-941139) DBG | domain functional-941139 has defined MAC address 52:54:00:61:41:b1 in network mk-functional-941139
I0116 02:12:53.916482  986863 main.go:141] libmachine: (functional-941139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:41:b1", ip: ""} in network mk-functional-941139: {Iface:virbr1 ExpiryTime:2024-01-16 03:10:18 +0000 UTC Type:0 Mac:52:54:00:61:41:b1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:functional-941139 Clientid:01:52:54:00:61:41:b1}
I0116 02:12:53.916522  986863 main.go:141] libmachine: (functional-941139) DBG | domain functional-941139 has defined IP address 192.168.39.199 and MAC address 52:54:00:61:41:b1 in network mk-functional-941139
I0116 02:12:53.916651  986863 main.go:141] libmachine: (functional-941139) Calling .GetSSHPort
I0116 02:12:53.916863  986863 main.go:141] libmachine: (functional-941139) Calling .GetSSHKeyPath
I0116 02:12:53.917039  986863 main.go:141] libmachine: (functional-941139) Calling .GetSSHUsername
I0116 02:12:53.917217  986863 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/functional-941139/id_rsa Username:docker}
I0116 02:12:54.090821  986863 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 02:12:54.443507  986863 main.go:141] libmachine: Making call to close driver server
I0116 02:12:54.443526  986863 main.go:141] libmachine: (functional-941139) Calling .Close
I0116 02:12:54.443848  986863 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:12:54.443871  986863 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:12:54.443889  986863 main.go:141] libmachine: Making call to close driver server
I0116 02:12:54.443899  986863 main.go:141] libmachine: (functional-941139) Calling .Close
I0116 02:12:54.444177  986863 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:12:54.444196  986863 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:12:54.444247  986863 main.go:141] libmachine: (functional-941139) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (13.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941139 ssh pgrep buildkitd: exit status 1 (353.224178ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image build -t localhost/my-image:functional-941139 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-941139 image build -t localhost/my-image:functional-941139 testdata/build --alsologtostderr: (12.438122147s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-941139 image build -t localhost/my-image:functional-941139 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b6d88707841
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-941139
--> 6182c46f928
Successfully tagged localhost/my-image:functional-941139
6182c46f92828bc8dcd65edfe0a7e656e7434afbc98ed8c5c563c2ffffb03ad6
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-941139 image build -t localhost/my-image:functional-941139 testdata/build --alsologtostderr:
I0116 02:12:54.872273  986944 out.go:296] Setting OutFile to fd 1 ...
I0116 02:12:54.872539  986944 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:12:54.872549  986944 out.go:309] Setting ErrFile to fd 2...
I0116 02:12:54.872554  986944 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:12:54.872728  986944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
I0116 02:12:54.873324  986944 config.go:182] Loaded profile config "functional-941139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:12:54.873922  986944 config.go:182] Loaded profile config "functional-941139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:12:54.874381  986944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:12:54.874442  986944 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:12:54.890333  986944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38539
I0116 02:12:54.890925  986944 main.go:141] libmachine: () Calling .GetVersion
I0116 02:12:54.891509  986944 main.go:141] libmachine: Using API Version  1
I0116 02:12:54.891535  986944 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:12:54.891943  986944 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:12:54.892205  986944 main.go:141] libmachine: (functional-941139) Calling .GetState
I0116 02:12:54.894489  986944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:12:54.894549  986944 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:12:54.910783  986944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33809
I0116 02:12:54.911238  986944 main.go:141] libmachine: () Calling .GetVersion
I0116 02:12:54.911978  986944 main.go:141] libmachine: Using API Version  1
I0116 02:12:54.912028  986944 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:12:54.912460  986944 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:12:54.912674  986944 main.go:141] libmachine: (functional-941139) Calling .DriverName
I0116 02:12:54.912987  986944 ssh_runner.go:195] Run: systemctl --version
I0116 02:12:54.913017  986944 main.go:141] libmachine: (functional-941139) Calling .GetSSHHostname
I0116 02:12:54.916686  986944 main.go:141] libmachine: (functional-941139) DBG | domain functional-941139 has defined MAC address 52:54:00:61:41:b1 in network mk-functional-941139
I0116 02:12:54.917203  986944 main.go:141] libmachine: (functional-941139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:41:b1", ip: ""} in network mk-functional-941139: {Iface:virbr1 ExpiryTime:2024-01-16 03:10:18 +0000 UTC Type:0 Mac:52:54:00:61:41:b1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:functional-941139 Clientid:01:52:54:00:61:41:b1}
I0116 02:12:54.917234  986944 main.go:141] libmachine: (functional-941139) DBG | domain functional-941139 has defined IP address 192.168.39.199 and MAC address 52:54:00:61:41:b1 in network mk-functional-941139
I0116 02:12:54.917446  986944 main.go:141] libmachine: (functional-941139) Calling .GetSSHPort
I0116 02:12:54.917663  986944 main.go:141] libmachine: (functional-941139) Calling .GetSSHKeyPath
I0116 02:12:54.917843  986944 main.go:141] libmachine: (functional-941139) Calling .GetSSHUsername
I0116 02:12:54.918043  986944 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/functional-941139/id_rsa Username:docker}
I0116 02:12:55.060163  986944 build_images.go:151] Building image from path: /tmp/build.4231347536.tar
I0116 02:12:55.060291  986944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0116 02:12:55.120547  986944 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4231347536.tar
I0116 02:12:55.131931  986944 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4231347536.tar: stat -c "%s %y" /var/lib/minikube/build/build.4231347536.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4231347536.tar': No such file or directory
I0116 02:12:55.131970  986944 ssh_runner.go:362] scp /tmp/build.4231347536.tar --> /var/lib/minikube/build/build.4231347536.tar (3072 bytes)
I0116 02:12:55.440758  986944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4231347536
I0116 02:12:55.471889  986944 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4231347536 -xf /var/lib/minikube/build/build.4231347536.tar
I0116 02:12:55.498846  986944 crio.go:297] Building image: /var/lib/minikube/build/build.4231347536
I0116 02:12:55.498944  986944 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-941139 /var/lib/minikube/build/build.4231347536 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0116 02:13:07.196322  986944 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-941139 /var/lib/minikube/build/build.4231347536 --cgroup-manager=cgroupfs: (11.697344841s)
I0116 02:13:07.196409  986944 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4231347536
I0116 02:13:07.209400  986944 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4231347536.tar
I0116 02:13:07.233974  986944 build_images.go:207] Built localhost/my-image:functional-941139 from /tmp/build.4231347536.tar
I0116 02:13:07.234023  986944 build_images.go:123] succeeded building to: functional-941139
I0116 02:13:07.234030  986944 build_images.go:124] failed building to: 
I0116 02:13:07.234067  986944 main.go:141] libmachine: Making call to close driver server
I0116 02:13:07.234085  986944 main.go:141] libmachine: (functional-941139) Calling .Close
I0116 02:13:07.234403  986944 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:13:07.234425  986944 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:13:07.234442  986944 main.go:141] libmachine: Making call to close driver server
I0116 02:13:07.234451  986944 main.go:141] libmachine: (functional-941139) DBG | Closing plugin on server side
I0116 02:13:07.234454  986944 main.go:141] libmachine: (functional-941139) Calling .Close
I0116 02:13:07.235007  986944 main.go:141] libmachine: (functional-941139) DBG | Closing plugin on server side
I0116 02:13:07.235080  986944 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:13:07.235094  986944 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (13.14s)
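The build log above shows the flow: the local testdata/build context is packed into a tar (/tmp/build.4231347536.tar), copied under /var/lib/minikube/build on the node, unpacked, and built with "sudo podman build ... --cgroup-manager=cgroupfs". A minimal Go sketch that triggers the same build through the CLI, assuming the binary, profile, and testdata/build context from this log; it is not the test's own code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as the test: build testdata/build into localhost/my-image.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-941139",
			"image", "build", "-t", "localhost/my-image:functional-941139", "testdata/build")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out)) // podman build steps, as in the stdout above
		if err != nil {
			panic(err)
		}
	}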

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-941139
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image load --daemon gcr.io/google-containers/addon-resizer:functional-941139 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-941139 image load --daemon gcr.io/google-containers/addon-resizer:functional-941139 --alsologtostderr: (5.426849102s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.67s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "382.560358ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "74.655617ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "305.08984ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "67.149496ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
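The test above only times the two list commands; to inspect the machine-readable profile list yourself, the raw JSON can be re-indented without assuming anything about its schema. A minimal Go sketch, assuming the minikube binary from this log is at the path used below:

	package main

	import (
		"bytes"
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command the test times, then re-indent the JSON for reading.
		raw, err := exec.Command("out/minikube-linux-amd64",
			"profile", "list", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var pretty bytes.Buffer
		if err := json.Indent(&pretty, raw, "", "  "); err != nil {
			panic(err)
		}
		fmt.Println(pretty.String())
	}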

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-941139 /tmp/TestFunctionalparallelMountCmdany-port2499102701/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1705371153717389571" to /tmp/TestFunctionalparallelMountCmdany-port2499102701/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1705371153717389571" to /tmp/TestFunctionalparallelMountCmdany-port2499102701/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1705371153717389571" to /tmp/TestFunctionalparallelMountCmdany-port2499102701/001/test-1705371153717389571
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941139 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (301.770962ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 16 02:12 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 16 02:12 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 16 02:12 test-1705371153717389571
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh cat /mount-9p/test-1705371153717389571
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-941139 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [94c333e0-9f0c-413a-92a1-c1f8e56f5689] Pending
helpers_test.go:344: "busybox-mount" [94c333e0-9f0c-413a-92a1-c1f8e56f5689] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [94c333e0-9f0c-413a-92a1-c1f8e56f5689] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [94c333e0-9f0c-413a-92a1-c1f8e56f5689] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.005701193s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-941139 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-941139 /tmp/TestFunctionalparallelMountCmdany-port2499102701/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.05s)
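The mount test above starts "minikube mount <host dir>:/mount-9p" as a background daemon, then confirms a 9p filesystem is visible inside the VM with "findmnt -T /mount-9p | grep 9p" (the first probe can fail while the daemon is still starting, hence the retried non-zero exit above). A minimal Go sketch of that round trip, assuming the profile from this log and a hypothetical existing host directory /tmp/mount-src; the fixed sleep stands in for the test's retry loop:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Start the 9p mount daemon in the background.
		mount := exec.Command("out/minikube-linux-amd64", "mount",
			"-p", "functional-941139", "/tmp/mount-src:/mount-9p")
		if err := mount.Start(); err != nil {
			panic(err)
		}
		defer mount.Process.Kill() // stop the mount daemon on exit

		time.Sleep(3 * time.Second) // crude wait; the test retries the probe instead

		// Same probe as the test: is a 9p filesystem mounted at /mount-9p?
		check := exec.Command("out/minikube-linux-amd64", "-p", "functional-941139",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		out, err := check.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			panic(err)
		}
	}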

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image load --daemon gcr.io/google-containers/addon-resizer:functional-941139 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-941139 image load --daemon gcr.io/google-containers/addon-resizer:functional-941139 --alsologtostderr: (2.510790754s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-941139
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image load --daemon gcr.io/google-containers/addon-resizer:functional-941139 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-941139 image load --daemon gcr.io/google-containers/addon-resizer:functional-941139 --alsologtostderr: (7.108407496s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 service list -o json
functional_test.go:1493: Took "612.85681ms" to run "out/minikube-linux-amd64 -p functional-941139 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)
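Note: the JSONOutput subtest above runs "out/minikube-linux-amd64 -p functional-941139 service list -o json" and records how long it took. Below is a minimal Go sketch of consuming that JSON, assuming only that the command prints a single JSON array to stdout; no particular field names are assumed, so it decodes into generic values.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Run the same command the test exercises; binary path and profile name taken from this run.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-941139",
		"service", "list", "-o", "json").Output()
	if err != nil {
		log.Fatalf("service list failed: %v", err)
	}

	// Decode into generic values so no specific field layout is assumed.
	var services []map[string]interface{}
	if err := json.Unmarshal(out, &services); err != nil {
		log.Fatalf("output was not a JSON array: %v", err)
	}
	fmt.Printf("found %d services\n", len(services))
}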

TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.199:30357
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.199:30357
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)

TestFunctional/parallel/MountCmd/specific-port (2.37s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-941139 /tmp/TestFunctionalparallelMountCmdspecific-port977668256/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941139 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (334.575265ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-941139 /tmp/TestFunctionalparallelMountCmdspecific-port977668256/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941139 ssh "sudo umount -f /mount-9p": exit status 1 (232.061764ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-941139 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-941139 /tmp/TestFunctionalparallelMountCmdspecific-port977668256/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.37s)
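Note: in the specific-port run above, the first "findmnt -T /mount-9p | grep 9p" probe exits with status 1 and the test simply retries, since the mount daemon started on --port 46464 may still be attaching. A minimal sketch of that retry loop, assuming the binary path and profile name from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// wait9p polls `minikube ssh findmnt` until the 9p mount appears or the retries run out.
func wait9p(profile, mountPoint string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("mounted: %s", out)
			return nil
		}
		time.Sleep(time.Second) // the mount daemon may still be attaching
	}
	return fmt.Errorf("%s never showed a 9p mount", mountPoint)
}

func main() {
	if err := wait9p("functional-941139", "/mount-9p", 10); err != nil {
		log.Fatal(err)
	}
}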

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image save gcr.io/google-containers/addon-resizer:functional-941139 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-941139 image save gcr.io/google-containers/addon-resizer:functional-941139 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.345557781s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.35s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-941139 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1576967820/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-941139 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1576967820/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-941139 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1576967820/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941139 ssh "findmnt -T" /mount1: exit status 1 (349.6671ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-941139 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-941139 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1576967820/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-941139 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1576967820/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-941139 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1576967820/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image rm gcr.io/google-containers/addon-resizer:functional-941139 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.69s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-941139 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.376686537s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.69s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-941139
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-941139 image save --daemon gcr.io/google-containers/addon-resizer:functional-941139 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-941139 image save --daemon gcr.io/google-containers/addon-resizer:functional-941139 --alsologtostderr: (1.460751024s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-941139
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.50s)
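Note: the ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon subtests above together exercise a save, remove and reload round trip for the cached addon-resizer image. A minimal sketch of that round trip using the same CLI verbs shown in these logs, assuming the binary and profile from this run and a hypothetical /tmp tarball path (the run itself wrote into the Jenkins workspace):

package main

import (
	"log"
	"os"
	"os/exec"
)

// run invokes the minikube binary with the given arguments and stops on the first failure.
func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%v failed: %v", args, err)
	}
}

func main() {
	const (
		profile = "functional-941139"
		image   = "gcr.io/google-containers/addon-resizer:functional-941139"
		tarball = "/tmp/addon-resizer-save.tar" // hypothetical path for this sketch
	)
	run("-p", profile, "image", "save", image, tarball) // export the cached image to a tarball
	run("-p", profile, "image", "rm", image)            // drop it from the runtime
	run("-p", profile, "image", "load", tarball)        // reload it from the tarball
	run("-p", profile, "image", "ls")                   // confirm it is listed again
}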

TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-941139
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-941139
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-941139
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (77.21s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-473102 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0116 02:13:22.737550  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 02:13:32.978525  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 02:13:53.458991  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 02:14:34.419865  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-473102 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m17.211263936s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (77.21s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.48s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-473102 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-473102 addons enable ingress --alsologtostderr -v=5: (13.476955449s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.48s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.66s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-473102 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.66s)

TestJSONOutput/start/Command (104.22s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-591739 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0116 02:17:47.993702  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:18:08.474627  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:18:12.495434  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 02:18:40.183338  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 02:18:49.435224  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-591739 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m44.22019784s)
--- PASS: TestJSONOutput/start/Command (104.22s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-591739 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.69s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-591739 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.11s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-591739 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-591739 --output=json --user=testUser: (7.114286657s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-829753 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-829753 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.883126ms)

-- stdout --
	{"specversion":"1.0","id":"c9c0d586-d409-47cc-a0d4-6031981f7668","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-829753] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b773f907-0eff-4929-be55-69637d8750db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17967"}}
	{"specversion":"1.0","id":"d7b75784-fb22-4e4f-a0eb-656be439eb23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e5d1752f-e751-4b8e-a182-242f800de1be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig"}}
	{"specversion":"1.0","id":"e9e4ebcb-4d92-42ed-ab8f-c913dc742027","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube"}}
	{"specversion":"1.0","id":"d0d6cd71-0b5b-41d7-8ae5-3831ea755d24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"dc9bde47-d7ca-43a8-ab08-b2cae914e339","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d84343d4-f7a6-46b9-92eb-855c4fd09622","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-829753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-829753
--- PASS: TestErrorJSONOutput (0.24s)
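Note: each line of the stdout quoted above is a CloudEvents-style JSON object, which is the structure the JSONOutput tests parse. A minimal sketch of reading such a stream from Go, using only the fields visible in this run's output (specversion, type, and a string-valued data map); the error event type string is the one emitted above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// event mirrors the fields visible in the log above; anything else is ignored.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe `minikube start --output=json ...` into this program, one JSON object per line.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event: %s (exitcode %s)\n", ev.Data["message"], ev.Data["exitcode"])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}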

TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (103s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-428945 --driver=kvm2  --container-runtime=crio
E0116 02:19:50.169675  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:19:50.174982  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:19:50.185297  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:19:50.205633  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:19:50.245999  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:19:50.326456  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:19:50.486919  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:19:50.807585  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:19:51.448621  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:19:52.729538  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:19:55.290619  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:20:00.411467  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:20:10.652526  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:20:11.356456  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-428945 --driver=kvm2  --container-runtime=crio: (48.953578323s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-431947 --driver=kvm2  --container-runtime=crio
E0116 02:20:31.132850  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:21:12.093953  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-431947 --driver=kvm2  --container-runtime=crio: (51.230043317s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-428945
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-431947
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-431947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-431947
helpers_test.go:175: Cleaning up "first-428945" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-428945
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-428945: (1.034202238s)
--- PASS: TestMinikubeProfile (103.00s)

TestMountStart/serial/StartWithMountFirst (28.64s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-715346 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-715346 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.640665835s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.64s)

TestMountStart/serial/VerifyMountFirst (0.44s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-715346 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-715346 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.44s)

TestMountStart/serial/StartWithMountSecond (28.1s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-732370 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-732370 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.096223221s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.10s)

TestMountStart/serial/VerifyMountSecond (0.42s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-732370 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-732370 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.42s)

TestMountStart/serial/DeleteFirst (0.92s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-715346 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.92s)

TestMountStart/serial/VerifyMountPostDelete (0.43s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-732370 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-732370 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.43s)

TestMountStart/serial/Stop (1.2s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-732370
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-732370: (1.202808225s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (25.24s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-732370
E0116 02:22:27.515208  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:22:34.015077  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-732370: (24.242099345s)
--- PASS: TestMountStart/serial/RestartStopped (25.24s)

TestMountStart/serial/VerifyMountPostStop (0.46s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-732370 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-732370 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.46s)

TestMultiNode/serial/FreshStart2Nodes (115.1s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-835787 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0116 02:22:55.196764  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:23:12.495880  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-835787 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.649356166s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (115.10s)

TestMultiNode/serial/DeployApp2Nodes (4.82s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-835787 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-835787 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-835787 -- rollout status deployment/busybox: (2.845634781s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-835787 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-835787 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-835787 -- exec busybox-5bc68d56bd-f6p29 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-835787 -- exec busybox-5bc68d56bd-hzzdv -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-835787 -- exec busybox-5bc68d56bd-f6p29 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-835787 -- exec busybox-5bc68d56bd-hzzdv -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-835787 -- exec busybox-5bc68d56bd-f6p29 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-835787 -- exec busybox-5bc68d56bd-hzzdv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.82s)
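Note: DeployApp2Nodes above rolls out a two-replica busybox deployment and then runs the same three nslookup probes in each pod. A minimal sketch of that probe loop, assuming kubectl is on PATH and using the context and pod names from this run:

package main

import (
	"log"
	"os/exec"
)

func main() {
	pods := []string{"busybox-5bc68d56bd-f6p29", "busybox-5bc68d56bd-hzzdv"} // pod names from this run
	targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

	for _, pod := range pods {
		for _, target := range targets {
			// Same probe the test issues: resolve the name from inside the pod.
			out, err := exec.Command("kubectl", "--context", "multinode-835787",
				"exec", pod, "--", "nslookup", target).CombinedOutput()
			if err != nil {
				log.Fatalf("%s could not resolve %s: %v\n%s", pod, target, err, out)
			}
			log.Printf("%s resolved %s", pod, target)
		}
	}
}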

TestMultiNode/serial/AddNode (42.33s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-835787 -v 3 --alsologtostderr
E0116 02:25:17.855872  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-835787 -v 3 --alsologtostderr: (41.701721425s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.33s)

TestMultiNode/serial/MultiNodeLabels (0.07s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-835787 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.24s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

TestMultiNode/serial/CopyFile (8.26s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 cp testdata/cp-test.txt multinode-835787:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 cp multinode-835787:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile13874096/001/cp-test_multinode-835787.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 cp multinode-835787:/home/docker/cp-test.txt multinode-835787-m02:/home/docker/cp-test_multinode-835787_multinode-835787-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787-m02 "sudo cat /home/docker/cp-test_multinode-835787_multinode-835787-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 cp multinode-835787:/home/docker/cp-test.txt multinode-835787-m03:/home/docker/cp-test_multinode-835787_multinode-835787-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787-m03 "sudo cat /home/docker/cp-test_multinode-835787_multinode-835787-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 cp testdata/cp-test.txt multinode-835787-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 cp multinode-835787-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile13874096/001/cp-test_multinode-835787-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 cp multinode-835787-m02:/home/docker/cp-test.txt multinode-835787:/home/docker/cp-test_multinode-835787-m02_multinode-835787.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787 "sudo cat /home/docker/cp-test_multinode-835787-m02_multinode-835787.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 cp multinode-835787-m02:/home/docker/cp-test.txt multinode-835787-m03:/home/docker/cp-test_multinode-835787-m02_multinode-835787-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787-m03 "sudo cat /home/docker/cp-test_multinode-835787-m02_multinode-835787-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 cp testdata/cp-test.txt multinode-835787-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 cp multinode-835787-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile13874096/001/cp-test_multinode-835787-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 cp multinode-835787-m03:/home/docker/cp-test.txt multinode-835787:/home/docker/cp-test_multinode-835787-m03_multinode-835787.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787 "sudo cat /home/docker/cp-test_multinode-835787-m03_multinode-835787.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 cp multinode-835787-m03:/home/docker/cp-test.txt multinode-835787-m02:/home/docker/cp-test_multinode-835787-m03_multinode-835787-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 ssh -n multinode-835787-m02 "sudo cat /home/docker/cp-test_multinode-835787-m03_multinode-835787-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.26s)
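Note: CopyFile above pushes testdata/cp-test.txt into each node with "minikube cp" and reads it back with "ssh -n <node> sudo cat ...", comparing the two. A minimal sketch of one such round trip, assuming the binary, profile and node names from this run:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const (
		profile = "multinode-835787"
		node    = "multinode-835787-m02"
		src     = "testdata/cp-test.txt"
		dst     = "/home/docker/cp-test.txt"
	)

	want, err := os.ReadFile(src)
	if err != nil {
		log.Fatal(err)
	}

	// Copy the file into the node, then cat it back over ssh.
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"cp", src, node+":"+dst).CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}
	got, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "-n", node, "sudo cat "+dst).Output()
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatalf("round trip mismatch on %s", node)
	}
	log.Printf("cp round trip verified on %s", node)
}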

TestMultiNode/serial/StopNode (3.07s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-835787 node stop m03: (2.10625497s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-835787 status: exit status 7 (482.414888ms)

-- stdout --
	multinode-835787
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-835787-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-835787-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-835787 status --alsologtostderr: exit status 7 (478.968887ms)

-- stdout --
	multinode-835787
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-835787-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-835787-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0116 02:25:44.137240  994283 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:25:44.137500  994283 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:25:44.137509  994283 out.go:309] Setting ErrFile to fd 2...
	I0116 02:25:44.137514  994283 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:25:44.137697  994283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 02:25:44.137916  994283 out.go:303] Setting JSON to false
	I0116 02:25:44.137963  994283 mustload.go:65] Loading cluster: multinode-835787
	I0116 02:25:44.138070  994283 notify.go:220] Checking for updates...
	I0116 02:25:44.138460  994283 config.go:182] Loaded profile config "multinode-835787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:25:44.138481  994283 status.go:255] checking status of multinode-835787 ...
	I0116 02:25:44.138974  994283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:25:44.139065  994283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:25:44.155152  994283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I0116 02:25:44.155650  994283 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:25:44.156248  994283 main.go:141] libmachine: Using API Version  1
	I0116 02:25:44.156279  994283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:25:44.156731  994283 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:25:44.156956  994283 main.go:141] libmachine: (multinode-835787) Calling .GetState
	I0116 02:25:44.158643  994283 status.go:330] multinode-835787 host status = "Running" (err=<nil>)
	I0116 02:25:44.158668  994283 host.go:66] Checking if "multinode-835787" exists ...
	I0116 02:25:44.159098  994283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:25:44.159149  994283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:25:44.175349  994283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33931
	I0116 02:25:44.175809  994283 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:25:44.176318  994283 main.go:141] libmachine: Using API Version  1
	I0116 02:25:44.176343  994283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:25:44.176737  994283 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:25:44.176936  994283 main.go:141] libmachine: (multinode-835787) Calling .GetIP
	I0116 02:25:44.179957  994283 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:25:44.180372  994283 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:25:44.180408  994283 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:25:44.180510  994283 host.go:66] Checking if "multinode-835787" exists ...
	I0116 02:25:44.180818  994283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:25:44.180861  994283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:25:44.197486  994283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45449
	I0116 02:25:44.197992  994283 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:25:44.198555  994283 main.go:141] libmachine: Using API Version  1
	I0116 02:25:44.198580  994283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:25:44.198962  994283 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:25:44.199175  994283 main.go:141] libmachine: (multinode-835787) Calling .DriverName
	I0116 02:25:44.199416  994283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 02:25:44.199442  994283 main.go:141] libmachine: (multinode-835787) Calling .GetSSHHostname
	I0116 02:25:44.202498  994283 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:25:44.202900  994283 main.go:141] libmachine: (multinode-835787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:87:3c", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:23:03 +0000 UTC Type:0 Mac:52:54:00:20:87:3c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-835787 Clientid:01:52:54:00:20:87:3c}
	I0116 02:25:44.202938  994283 main.go:141] libmachine: (multinode-835787) DBG | domain multinode-835787 has defined IP address 192.168.39.50 and MAC address 52:54:00:20:87:3c in network mk-multinode-835787
	I0116 02:25:44.203000  994283 main.go:141] libmachine: (multinode-835787) Calling .GetSSHPort
	I0116 02:25:44.203202  994283 main.go:141] libmachine: (multinode-835787) Calling .GetSSHKeyPath
	I0116 02:25:44.203359  994283 main.go:141] libmachine: (multinode-835787) Calling .GetSSHUsername
	I0116 02:25:44.203494  994283 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787/id_rsa Username:docker}
	I0116 02:25:44.294299  994283 ssh_runner.go:195] Run: systemctl --version
	I0116 02:25:44.301401  994283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:25:44.318129  994283 kubeconfig.go:92] found "multinode-835787" server: "https://192.168.39.50:8443"
	I0116 02:25:44.318167  994283 api_server.go:166] Checking apiserver status ...
	I0116 02:25:44.318221  994283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:25:44.334883  994283 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1131/cgroup
	I0116 02:25:44.346758  994283 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/podb27880b6b81ca11dc023b4901941ff6f/crio-d898998193986881c6e265f064f078dc716114d2642e7c9b13934a85d0cb4139"
	I0116 02:25:44.346844  994283 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podb27880b6b81ca11dc023b4901941ff6f/crio-d898998193986881c6e265f064f078dc716114d2642e7c9b13934a85d0cb4139/freezer.state
	I0116 02:25:44.359193  994283 api_server.go:204] freezer state: "THAWED"
	I0116 02:25:44.359227  994283 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 02:25:44.364784  994283 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I0116 02:25:44.364826  994283 status.go:421] multinode-835787 apiserver status = Running (err=<nil>)
	I0116 02:25:44.364846  994283 status.go:257] multinode-835787 status: &{Name:multinode-835787 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0116 02:25:44.364884  994283 status.go:255] checking status of multinode-835787-m02 ...
	I0116 02:25:44.365321  994283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:25:44.365376  994283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:25:44.381065  994283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38433
	I0116 02:25:44.381655  994283 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:25:44.382215  994283 main.go:141] libmachine: Using API Version  1
	I0116 02:25:44.382241  994283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:25:44.382652  994283 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:25:44.382832  994283 main.go:141] libmachine: (multinode-835787-m02) Calling .GetState
	I0116 02:25:44.384374  994283 status.go:330] multinode-835787-m02 host status = "Running" (err=<nil>)
	I0116 02:25:44.384396  994283 host.go:66] Checking if "multinode-835787-m02" exists ...
	I0116 02:25:44.384813  994283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:25:44.384869  994283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:25:44.400648  994283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34161
	I0116 02:25:44.401183  994283 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:25:44.401638  994283 main.go:141] libmachine: Using API Version  1
	I0116 02:25:44.401661  994283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:25:44.402043  994283 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:25:44.402218  994283 main.go:141] libmachine: (multinode-835787-m02) Calling .GetIP
	I0116 02:25:44.404979  994283 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:25:44.405386  994283 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:25:44.405422  994283 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:25:44.405615  994283 host.go:66] Checking if "multinode-835787-m02" exists ...
	I0116 02:25:44.406012  994283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:25:44.406053  994283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:25:44.421721  994283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34533
	I0116 02:25:44.422334  994283 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:25:44.423829  994283 main.go:141] libmachine: Using API Version  1
	I0116 02:25:44.423862  994283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:25:44.424280  994283 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:25:44.424486  994283 main.go:141] libmachine: (multinode-835787-m02) Calling .DriverName
	I0116 02:25:44.424701  994283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 02:25:44.424726  994283 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHHostname
	I0116 02:25:44.427670  994283 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:25:44.428063  994283 main.go:141] libmachine: (multinode-835787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:d4:5b", ip: ""} in network mk-multinode-835787: {Iface:virbr1 ExpiryTime:2024-01-16 03:24:11 +0000 UTC Type:0 Mac:52:54:00:83:d4:5b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-835787-m02 Clientid:01:52:54:00:83:d4:5b}
	I0116 02:25:44.428097  994283 main.go:141] libmachine: (multinode-835787-m02) DBG | domain multinode-835787-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:83:d4:5b in network mk-multinode-835787
	I0116 02:25:44.428242  994283 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHPort
	I0116 02:25:44.428431  994283 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHKeyPath
	I0116 02:25:44.428559  994283 main.go:141] libmachine: (multinode-835787-m02) Calling .GetSSHUsername
	I0116 02:25:44.428670  994283 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-971255/.minikube/machines/multinode-835787-m02/id_rsa Username:docker}
	I0116 02:25:44.521299  994283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:25:44.534143  994283 status.go:257] multinode-835787-m02 status: &{Name:multinode-835787-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0116 02:25:44.534194  994283 status.go:255] checking status of multinode-835787-m03 ...
	I0116 02:25:44.534541  994283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:25:44.534597  994283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:25:44.549749  994283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45357
	I0116 02:25:44.550232  994283 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:25:44.550828  994283 main.go:141] libmachine: Using API Version  1
	I0116 02:25:44.550867  994283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:25:44.551203  994283 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:25:44.551444  994283 main.go:141] libmachine: (multinode-835787-m03) Calling .GetState
	I0116 02:25:44.553275  994283 status.go:330] multinode-835787-m03 host status = "Stopped" (err=<nil>)
	I0116 02:25:44.553293  994283 status.go:343] host is not running, skipping remaining checks
	I0116 02:25:44.553298  994283 status.go:257] multinode-835787-m03 status: &{Name:multinode-835787-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.07s)
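For context, the apiserver check in the status log above boils down to three probes run over SSH inside the control-plane VM. A minimal shell sketch of the same sequence (run from inside the VM, e.g. via minikube ssh; it assumes the cgroup v1 freezer layout used by this VM image, and the IP is the one captured in this run):

    # 1. find the newest kube-apiserver process
    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    # 2. confirm its freezer cgroup is THAWED (i.e. the pod is not paused)
    CGPATH=$(sudo grep -E '^[0-9]+:freezer:' /proc/${PID}/cgroup | cut -d: -f3)
    sudo cat "/sys/fs/cgroup/freezer${CGPATH}/freezer.state"    # expected: THAWED
    # 3. hit the apiserver health endpoint (self-signed cert, hence -k)
    curl -sk https://192.168.39.50:8443/healthz                 # expected: ok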

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (29.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-835787 node start m03 --alsologtostderr: (28.518296013s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.21s)
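The StopNode/StartAfterStop pair exercises the per-node lifecycle; roughly the commands these two subtests issue, using the profile and node names from this run:

    minikube -p multinode-835787 node stop m03      # stop only the m03 worker
    minikube -p multinode-835787 node start m03     # bring it back
    minikube -p multinode-835787 status             # m03 should report Running again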

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (1.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-835787 node delete m03: (1.248641733s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.84s)
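The readiness check at multinode_test.go:460 uses a go-template to print each node's Ready condition. The same query is shown below, together with a jsonpath variant that reads a little more easily (the jsonpath form is only an equivalent alternative, not what the test runs):

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'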

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (441.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-835787 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0116 02:42:27.515583  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 02:43:12.495652  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 02:44:50.169756  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 02:46:15.545280  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 02:47:27.513349  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-835787 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m21.25308887s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-835787 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (441.84s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (53.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-835787
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-835787-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-835787-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (82.908493ms)

                                                
                                                
-- stdout --
	* [multinode-835787-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-835787-m02' is duplicated with machine name 'multinode-835787-m02' in profile 'multinode-835787'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-835787-m03 --driver=kvm2  --container-runtime=crio
E0116 02:48:12.496247  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-835787-m03 --driver=kvm2  --container-runtime=crio: (52.387041266s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-835787
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-835787: exit status 80 (247.297032ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-835787
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-835787-m03 already exists in multinode-835787-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-835787-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-835787-m03: (1.022925911s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (53.80s)
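ValidateNameConflict confirms that a new profile may not reuse an existing profile or machine name. Before starting a new profile, the names already in use can be listed as below (jq is not part of the test, and the .valid[].Name field layout is an assumption about minikube's profile JSON):

    minikube profile list --output json | jq -r '.valid[].Name'   # existing profiles
    minikube node list -p multinode-835787                        # node names inside one profile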

                                                
                                    
x
+
TestScheduledStopUnix (119.12s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-890422 --memory=2048 --driver=kvm2  --container-runtime=crio
E0116 02:53:12.495358  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-890422 --memory=2048 --driver=kvm2  --container-runtime=crio: (47.166569026s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-890422 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-890422 -n scheduled-stop-890422
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-890422 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-890422 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-890422 -n scheduled-stop-890422
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-890422
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-890422 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0116 02:54:50.169589  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-890422
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-890422: exit status 7 (87.538267ms)

                                                
                                                
-- stdout --
	scheduled-stop-890422
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-890422 -n scheduled-stop-890422
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-890422 -n scheduled-stop-890422: exit status 7 (87.045918ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-890422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-890422
--- PASS: TestScheduledStopUnix (119.12s)
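The scheduled-stop flow above can be driven by hand with the same flags (durations here are illustrative):

    minikube stop -p scheduled-stop-890422 --schedule 5m                  # arm a delayed stop
    minikube status -p scheduled-stop-890422 --format '{{.TimeToStop}}'   # check the remaining time
    minikube stop -p scheduled-stop-890422 --cancel-scheduled             # disarm it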

                                                
                                    
x
+
TestRunningBinaryUpgrade (226.26s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1409358700 start -p running-upgrade-221352 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1409358700 start -p running-upgrade-221352 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m9.206364826s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-221352 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0116 02:57:27.512941  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-221352 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m34.974225249s)
helpers_test.go:175: Cleaning up "running-upgrade-221352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-221352
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-221352: (1.457361647s)
--- PASS: TestRunningBinaryUpgrade (226.26s)
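TestRunningBinaryUpgrade upgrades in place by re-running start on the same profile with the newer binary. Condensed from the log (the /tmp path is the old release the test fetched for this run):

    /tmp/minikube-v1.26.0.1409358700 start -p running-upgrade-221352 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-221352 --memory=2200 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p running-upgrade-221352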

                                                
                                    
x
+
TestKubernetesUpgrade (217.49s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-396556 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-396556 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m25.625318644s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-396556
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-396556: (6.184247096s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-396556 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-396556 status --format={{.Host}}: exit status 7 (87.208918ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-396556 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-396556 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.430341932s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-396556 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-396556 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-396556 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (119.327948ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-396556] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-396556
	    minikube start -p kubernetes-upgrade-396556 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3965562 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-396556 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-396556 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-396556 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.84892232s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-396556" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-396556
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-396556: (1.115161479s)
--- PASS: TestKubernetesUpgrade (217.49s)
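Condensed, the upgrade path the test validates is stop-then-start with a newer --kubernetes-version; a direct downgrade on the same profile is rejected with K8S_DOWNGRADE_UNSUPPORTED, as shown above:

    minikube start -p kubernetes-upgrade-396556 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p kubernetes-upgrade-396556
    minikube start -p kubernetes-upgrade-396556 --kubernetes-version=v1.29.0-rc.2 --driver=kvm2 --container-runtime=crio
    kubectl --context kubernetes-upgrade-396556 version --output=json     # confirm the server version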

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-204843 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-204843 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (102.729145ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-204843] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (98.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-204843 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-204843 --driver=kvm2  --container-runtime=crio: (1m37.947973146s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-204843 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (98.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (41.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-204843 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-204843 --no-kubernetes --driver=kvm2  --container-runtime=crio: (39.846780056s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-204843 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-204843 status -o json: exit status 2 (271.44521ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-204843","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-204843
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-204843: (1.071401728s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (41.19s)
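Note that the exit status 2 above is not a failure of the status command itself: with Kubernetes stopped, status signals the stopped components through its exit code while still emitting the JSON document, which can be consumed normally (jq is only an illustration; the field names match the JSON printed above):

    minikube -p NoKubernetes-204843 status -o json | jq -r '.Host, .Kubelet, .APIServer'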

                                                
                                    
x
+
TestNoKubernetes/serial/Start (57.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-204843 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-204843 --no-kubernetes --driver=kvm2  --container-runtime=crio: (57.646955398s)
--- PASS: TestNoKubernetes/serial/Start (57.65s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-204843 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-204843 "sudo systemctl is-active --quiet service kubelet": exit status 1 (249.448859ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
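VerifyK8sNotRunning leans on systemd semantics: is-active --quiet exits 0 only when the unit is active, so the non-zero exit seen above (status 3, which systemd uses for an inactive unit) is the expected outcome for a --no-kubernetes profile. The same check written out as a sketch:

    if out/minikube-linux-amd64 ssh -p NoKubernetes-204843 "sudo systemctl is-active --quiet service kubelet"; then
      echo "kubelet is running (unexpected for --no-kubernetes)"
    else
      echo "kubelet is not running (expected)"
    fi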

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (26.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (12.42144855s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.016740827s)
--- PASS: TestNoKubernetes/serial/ProfileList (26.44s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-204843
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-204843: (1.342167734s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (22.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-204843 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-204843 --driver=kvm2  --container-runtime=crio: (22.563974547s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.56s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-204843 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-204843 "sudo systemctl is-active --quiet service kubelet": exit status 1 (229.598849ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
x
+
TestPause/serial/Start (125.62s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-100619 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-100619 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m5.621906181s)
--- PASS: TestPause/serial/Start (125.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-278325 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-278325 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (126.33518ms)

                                                
                                                
-- stdout --
	* [false-278325] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 02:59:22.910513 1005507 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:59:22.910698 1005507 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:59:22.910709 1005507 out.go:309] Setting ErrFile to fd 2...
	I0116 02:59:22.910716 1005507 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:59:22.910938 1005507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-971255/.minikube/bin
	I0116 02:59:22.911550 1005507 out.go:303] Setting JSON to false
	I0116 02:59:22.912641 1005507 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13312,"bootTime":1705360651,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:59:22.912740 1005507 start.go:138] virtualization: kvm guest
	I0116 02:59:22.915045 1005507 out.go:177] * [false-278325] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:59:22.916649 1005507 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 02:59:22.916720 1005507 notify.go:220] Checking for updates...
	I0116 02:59:22.918087 1005507 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:59:22.919750 1005507 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-971255/kubeconfig
	I0116 02:59:22.921199 1005507 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-971255/.minikube
	I0116 02:59:22.922645 1005507 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:59:22.924062 1005507 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:59:22.926136 1005507 config.go:182] Loaded profile config "cert-expiration-920153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:59:22.926308 1005507 config.go:182] Loaded profile config "kubernetes-upgrade-396556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 02:59:22.926450 1005507 config.go:182] Loaded profile config "pause-100619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:59:22.926568 1005507 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:59:22.965894 1005507 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 02:59:22.967304 1005507 start.go:298] selected driver: kvm2
	I0116 02:59:22.967322 1005507 start.go:902] validating driver "kvm2" against <nil>
	I0116 02:59:22.967335 1005507 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:59:22.969642 1005507 out.go:177] 
	W0116 02:59:22.971080 1005507 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0116 02:59:22.972435 1005507 out.go:177] 

                                                
                                                
** /stderr **
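The exit above is the point of this subtest: with the crio runtime a CNI must be configured, so --cni=false is rejected at validation time before any VM is created. For comparison, a combination that passes that validation (bridge is just one of the accepted --cni values):

    out/minikube-linux-amd64 start -p false-278325 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio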
net_test.go:88: 
----------------------- debugLogs start: false-278325 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-278325

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-278325

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-278325

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-278325

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-278325

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-278325

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-278325

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-278325

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-278325

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-278325

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-278325

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-278325" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-278325" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 02:58:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.50.96:8443
  name: cert-expiration-920153
contexts:
- context:
    cluster: cert-expiration-920153
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 02:58:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-920153
  name: cert-expiration-920153
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-920153
  user:
    client-certificate: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/cert-expiration-920153/client.crt
    client-key: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/cert-expiration-920153/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-278325

>>> host: docker daemon status:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

>>> host: docker daemon config:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

>>> host: /etc/docker/daemon.json:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

>>> host: docker system info:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

>>> host: cri-docker daemon status:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

>>> host: cri-docker daemon config:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

>>> host: cri-dockerd version:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

>>> host: containerd daemon status:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

>>> host: containerd daemon config:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

>>> host: /etc/containerd/config.toml:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

>>> host: containerd config dump:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

>>> host: crio daemon status:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

>>> host: crio daemon config:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

>>> host: /etc/crio:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

>>> host: crio config:
* Profile "false-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278325"

----------------------- debugLogs end: false-278325 [took: 3.568928941s] --------------------------------
helpers_test.go:175: Cleaning up "false-278325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-278325
--- PASS: TestNetworkPlugins/group/false (3.87s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (145.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3612286170 start -p stopped-upgrade-583997 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0116 02:59:50.170103  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3612286170 start -p stopped-upgrade-583997 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m34.533936703s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3612286170 -p stopped-upgrade-583997 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3612286170 -p stopped-upgrade-583997 stop: (2.142310822s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-583997 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-583997 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.568896644s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (145.25s)
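
For reference, the upgrade path this test exercises can be replayed by hand with the same three commands the test ran (sketch only; the /tmp path is the previously released binary the test downloads, so it differs per run):
	# start a cluster with the old release, then stop it
	/tmp/minikube-v1.26.0.3612286170 start -p stopped-upgrade-583997 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.26.0.3612286170 -p stopped-upgrade-583997 stop
	# restart the stopped cluster with the binary under test
	out/minikube-linux-amd64 start -p stopped-upgrade-583997 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio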

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (140.8s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-100619 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-100619 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m20.779855328s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (140.80s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-583997
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (218.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-788237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E0116 03:02:27.513378  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-788237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (3m38.751492621s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (218.75s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (156.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-934668 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0116 03:02:55.545882  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 03:03:12.495693  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-934668 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m36.027550501s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (156.03s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (106.62s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-480663 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-480663 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m46.622695869s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (106.62s)

                                                
                                    
x
+
TestPause/serial/Pause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-100619 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-100619 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-100619 --output=json --layout=cluster: exit status 2 (293.36598ms)

                                                
                                                
-- stdout --
	{"Name":"pause-100619","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-100619","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.81s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-100619 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.81s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.27s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-100619 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-100619 --alsologtostderr -v=5: (1.274580549s)
--- PASS: TestPause/serial/PauseAgain (1.27s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.12s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-100619 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-100619 --alsologtostderr -v=5: (1.117575546s)
--- PASS: TestPause/serial/DeletePaused (1.12s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.25s)
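
The TestPause serial group above boils down to the following lifecycle against a single profile (sketch only; commands and profile name taken from the log):
	out/minikube-linux-amd64 pause -p pause-100619 --alsologtostderr -v=5
	out/minikube-linux-amd64 status -p pause-100619 --output=json --layout=cluster   # exit status 2 while paused is expected
	out/minikube-linux-amd64 unpause -p pause-100619 --alsologtostderr -v=5
	out/minikube-linux-amd64 delete -p pause-100619 --alsologtostderr -v=5
	out/minikube-linux-amd64 profile list --output json                              # confirms the profile is gone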

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (144.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-775571 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0116 03:04:50.170265  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-775571 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (2m24.929286682s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (144.93s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-934668 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5fafe925-a5f3-4bfe-ae9d-205fe5ac1089] Pending
helpers_test.go:344: "busybox" [5fafe925-a5f3-4bfe-ae9d-205fe5ac1089] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5fafe925-a5f3-4bfe-ae9d-205fe5ac1089] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.009375093s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-934668 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.40s)
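
Each DeployApp step in this group (no-preload, embed-certs, old-k8s-version, default-k8s-diff-port) is the same two-command check, shown here for the no-preload profile: create the busybox pod from testdata, wait for it to be Running, then run a trivial command in it.
	kubectl --context no-preload-934668 create -f testdata/busybox.yaml
	kubectl --context no-preload-934668 exec busybox -- /bin/sh -c "ulimit -n"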

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-480663 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b7ad1e22-9448-44d8-aee0-5170d264d3f6] Pending
helpers_test.go:344: "busybox" [b7ad1e22-9448-44d8-aee0-5170d264d3f6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b7ad1e22-9448-44d8-aee0-5170d264d3f6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005981546s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-480663 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.40s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-934668 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-934668 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.113169682s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-934668 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.20s)
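
The EnableAddonWhileActive checks all follow the pattern shown here for no-preload: enable metrics-server with its image and registry overridden to a stub, then describe the deployment to confirm the override landed (commands taken from the log):
	out/minikube-linux-amd64 addons enable metrics-server -p no-preload-934668 \
		--images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context no-preload-934668 describe deploy/metrics-server -n kube-system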

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-480663 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-480663 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.265071493s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-480663 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.36s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-788237 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2575c415-a1b3-46a7-883f-75480c74784e] Pending
helpers_test.go:344: "busybox" [2575c415-a1b3-46a7-883f-75480c74784e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2575c415-a1b3-46a7-883f-75480c74784e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.005190876s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-788237 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.48s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-788237 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-788237 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-775571 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4e370e2d-2c7e-4c5e-8f45-016e0c7a22fe] Pending
helpers_test.go:344: "busybox" [4e370e2d-2c7e-4c5e-8f45-016e0c7a22fe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4e370e2d-2c7e-4c5e-8f45-016e0c7a22fe] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.00497126s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-775571 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-775571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-775571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.110924569s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-775571 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (972.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-934668 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-934668 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (16m12.533623114s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-934668 -n no-preload-934668
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (972.83s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (586.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-480663 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-480663 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m46.486285421s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-480663 -n embed-certs-480663
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (586.83s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (705.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-788237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-788237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (11m45.316471756s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-788237 -n old-k8s-version-788237
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (705.62s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (876.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-775571 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0116 03:09:33.217677  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 03:09:50.170234  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 03:12:27.513360  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
E0116 03:13:12.495731  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 03:14:50.170550  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/ingress-addon-legacy-473102/client.crt: no such file or directory
E0116 03:17:27.513054  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-775571 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (14m36.414045337s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-775571 -n default-k8s-diff-port-775571
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (876.73s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (65.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-190843 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0116 03:32:27.513007  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/functional-941139/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-190843 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m5.04209153s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (65.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (108.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-278325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-278325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m48.494636492s)
--- PASS: TestNetworkPlugins/group/auto/Start (108.49s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.8s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-190843 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-190843 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.795079243s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.80s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (4.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-190843 --alsologtostderr -v=3
E0116 03:33:12.496372  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-190843 --alsologtostderr -v=3: (4.21447234s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (4.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190843 -n newest-cni-190843
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190843 -n newest-cni-190843: exit status 7 (101.957469ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-190843 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)
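
EnableAddonAfterStop relies on addons being toggleable while the profile is stopped; the exit status 7 from the status check is the expected "Stopped" result, not a failure. As commands (taken from the log):
	out/minikube-linux-amd64 stop -p newest-cni-190843 --alsologtostderr -v=3
	out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190843 -n newest-cni-190843   # exits 7: Stopped
	out/minikube-linux-amd64 addons enable dashboard -p newest-cni-190843 --images=MetricsScraper=registry.k8s.io/echoserver:1.4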

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (51.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-190843 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-190843 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (50.794315254s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190843 -n newest-cni-190843
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (51.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (85.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-278325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-278325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m25.189340084s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (112.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-278325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-278325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m52.770728422s)
--- PASS: TestNetworkPlugins/group/calico/Start (112.77s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-190843 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-190843 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-190843 -n newest-cni-190843
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-190843 -n newest-cni-190843: exit status 2 (331.110926ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-190843 -n newest-cni-190843
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-190843 -n newest-cni-190843: exit status 2 (348.184414ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-190843 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-190843 -n newest-cni-190843
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-190843 -n newest-cni-190843
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (111.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-278325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-278325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m51.846890156s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (111.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-278325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-278325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vzdsv" [d0f18783-ecb7-4821-b745-d34952964df9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vzdsv" [d0f18783-ecb7-4821-b745-d34952964df9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.006630404s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-278325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-278325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-278325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
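
The NetCatPod/DNS/Localhost/HairPin checks for each CNI profile run against one netcat deployment; for the auto profile the sequence is, roughly (commands taken from the log):
	kubectl --context auto-278325 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-278325 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-278325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-278325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"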

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-758fm" [ed6df438-5215-4406-a586-aae10a99193e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005806645s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (112.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-278325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-278325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m52.435417285s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (112.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-278325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (13.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-278325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4b8jq" [afdd7f07-6256-4ded-be94-12677b80463d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4b8jq" [afdd7f07-6256-4ded-be94-12677b80463d] Running
E0116 03:35:09.473439  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/client.crt: no such file or directory
E0116 03:35:09.478902  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/client.crt: no such file or directory
E0116 03:35:09.489088  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/client.crt: no such file or directory
E0116 03:35:09.509546  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/client.crt: no such file or directory
E0116 03:35:09.550677  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/client.crt: no such file or directory
E0116 03:35:09.631051  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/client.crt: no such file or directory
E0116 03:35:09.791449  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/client.crt: no such file or directory
E0116 03:35:10.112186  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/client.crt: no such file or directory
E0116 03:35:10.753114  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/client.crt: no such file or directory
E0116 03:35:12.033769  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/client.crt: no such file or directory
E0116 03:35:14.594773  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.004936102s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-278325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-278325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-278325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (92.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-278325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0116 03:35:39.496303  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/client.crt: no such file or directory
E0116 03:35:39.501653  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/client.crt: no such file or directory
E0116 03:35:39.512003  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/client.crt: no such file or directory
E0116 03:35:39.532377  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/client.crt: no such file or directory
E0116 03:35:39.572738  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/client.crt: no such file or directory
E0116 03:35:39.653891  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/client.crt: no such file or directory
E0116 03:35:39.814437  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/client.crt: no such file or directory
E0116 03:35:40.135033  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/client.crt: no such file or directory
E0116 03:35:40.775304  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/client.crt: no such file or directory
E0116 03:35:42.055533  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/client.crt: no such file or directory
E0116 03:35:44.616385  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-278325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m32.682733926s)
--- PASS: TestNetworkPlugins/group/flannel/Start (92.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-lwljv" [7e0ad89d-df24-4f8b-bde0-25d3663e1e8e] Running
E0116 03:35:49.737197  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/client.crt: no such file or directory
E0116 03:35:50.436626  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007812757s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-278325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-278325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kljgx" [a76a5ec0-30f9-4560-b419-a12a7da1a018] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kljgx" [a76a5ec0-30f9-4560-b419-a12a7da1a018] Running
E0116 03:35:59.977940  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.006182759s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.27s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-278325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-278325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-278325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-278325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-278325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-662c6" [2c060f12-5b0e-4d41-8a49-539bff35937f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-662c6" [2c060f12-5b0e-4d41-8a49-539bff35937f] Running
E0116 03:36:14.836151  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/client.crt: no such file or directory
E0116 03:36:14.841472  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/client.crt: no such file or directory
E0116 03:36:14.851756  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/client.crt: no such file or directory
E0116 03:36:14.872057  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/client.crt: no such file or directory
E0116 03:36:14.912388  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/client.crt: no such file or directory
E0116 03:36:14.992902  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/client.crt: no such file or directory
E0116 03:36:15.153786  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/client.crt: no such file or directory
E0116 03:36:15.474692  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/client.crt: no such file or directory
E0116 03:36:15.547628  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
E0116 03:36:16.115884  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/client.crt: no such file or directory
E0116 03:36:17.396713  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.005246682s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.51s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-278325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-278325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-278325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (105.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-278325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0116 03:36:25.078316  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/client.crt: no such file or directory
E0116 03:36:31.397333  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/no-preload-934668/client.crt: no such file or directory
E0116 03:36:35.319070  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-278325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m45.071297245s)
--- PASS: TestNetworkPlugins/group/bridge/Start (105.07s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-278325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-278325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ms4p8" [3c1383df-ecb5-4078-b881-5ba0d4d08584] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ms4p8" [3c1383df-ecb5-4078-b881-5ba0d4d08584] Running
E0116 03:36:55.799441  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/default-k8s-diff-port-775571/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.083700828s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-278325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-278325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-278325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gjqhb" [4952a81e-73c5-439d-b043-6c9ee9756fe1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005865708s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-278325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-278325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7zph4" [5164e15e-0347-4d18-afef-b2eec3b27a64] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7zph4" [5164e15e-0347-4d18-afef-b2eec3b27a64] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.005196493s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-278325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-278325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-278325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-278325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-278325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tvhn6" [78c229fa-c6bc-4e84-8be2-10869773b7d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0116 03:38:12.496363  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/addons-321835/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-tvhn6" [78c229fa-c6bc-4e84-8be2-10869773b7d2] Running
E0116 03:38:23.340296  978482 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/old-k8s-version-788237/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.005435105s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-278325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-278325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-278325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    

Test skip (39/309)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
131 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
133 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
163 TestImageBuild 0
196 TestKicCustomNetwork 0
197 TestKicExistingNetwork 0
198 TestKicCustomSubnet 0
199 TestKicStaticIP 0
231 TestChangeNoneUser 0
234 TestScheduledStopWindows 0
236 TestSkaffold 0
238 TestInsufficientStorage 0
242 TestMissingContainerUpgrade 0
260 TestStartStop/group/disable-driver-mounts 0.2
264 TestNetworkPlugins/group/kubenet 3.73
272 TestNetworkPlugins/group/cilium 4.33
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-807979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-807979
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-278325 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-278325

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-278325

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-278325

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-278325

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-278325

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-278325

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-278325

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-278325

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-278325

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-278325

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-278325

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-278325" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-278325" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 02:58:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.50.96:8443
  name: cert-expiration-920153
contexts:
- context:
    cluster: cert-expiration-920153
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 02:58:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-920153
  name: cert-expiration-920153
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-920153
  user:
    client-certificate: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/cert-expiration-920153/client.crt
    client-key: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/cert-expiration-920153/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-278325

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278325"

                                                
                                                
----------------------- debugLogs end: kubenet-278325 [took: 3.555214228s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-278325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-278325
--- SKIP: TestNetworkPlugins/group/kubenet (3.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-278325 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-278325

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-278325

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-278325

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-278325

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-278325

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-278325

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-278325

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-278325

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-278325

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-278325

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-278325

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-278325" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-278325

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-278325

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-278325

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-278325

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-278325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-278325" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17967-971255/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 02:58:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.50.96:8443
  name: cert-expiration-920153
contexts:
- context:
    cluster: cert-expiration-920153
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 02:58:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-920153
  name: cert-expiration-920153
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-920153
  user:
    client-certificate: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/cert-expiration-920153/client.crt
    client-key: /home/jenkins/minikube-integration/17967-971255/.minikube/profiles/cert-expiration-920153/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-278325

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-278325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278325"

                                                
                                                
----------------------- debugLogs end: cilium-278325 [took: 4.14534645s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-278325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-278325
--- SKIP: TestNetworkPlugins/group/cilium (4.33s)

                                                
                                    